Test Report: KVM_Linux_crio 19312

5c64880be4606435f09036ce2ec4c937eccc350b:2024-07-29:35539

Failed tests (13/278)

TestAddons/parallel/Ingress (154.3s)
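The failing step in the log below is the host-header curl through the ingress controller (addons_test.go:264): the command run inside the VM exits with status 28, which matches curl's "operation timed out" exit code, so the `minikube ssh` wrapper returns exit status 1 after roughly 2m10s. A minimal Go sketch for re-running that one check by hand follows; the profile name and binary path are taken from the log, while the helper itself is illustrative and not part of the test suite.

// repro_ingress_check.go - a small sketch that re-runs the check which
// failed in this test, using the binary path and profile from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "addons-657805" // profile name from the log

	// Same command as addons_test.go:264: curl the ingress endpoint from
	// inside the VM, sending the Host header the nginx Ingress rule matches.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		// In this run the remote command exited 28, which matches curl's
		// operation-timed-out exit code (no response on 127.0.0.1:80).
		fmt.Printf("ingress check failed: %v\n", err)
	}
}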

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-657805 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-657805 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-657805 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004116171s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-657805 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.862670781s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-657805 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.18
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 addons disable ingress --alsologtostderr -v=1: (7.690363329s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-657805 -n addons-657805
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 logs -n 25: (1.190799052s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-639764                                                                     | download-only-639764 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| delete  | -p download-only-933059                                                                     | download-only-933059 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-899353 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | binary-mirror-899353                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44815                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-899353                                                                     | binary-mirror-899353 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| addons  | disable dashboard -p                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-657805 --wait=true                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:51 UTC | 29 Jul 24 00:51 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:51 UTC | 29 Jul 24 00:51 UTC |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| ip      | addons-657805 ip                                                                            | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-657805 ssh curl -s                                                                   | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-657805 ssh cat                                                                       | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	|         | /opt/local-path-provisioner/pvc-e4f965f3-bc18-4e6c-89fd-eee01e8cf9ee_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-657805 addons                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-657805 addons                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | -p addons-657805                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | -p addons-657805                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-657805 ip                                                                            | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:54 UTC | 29 Jul 24 00:54 UTC |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:54 UTC | 29 Jul 24 00:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:54 UTC | 29 Jul 24 00:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 00:48:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 00:48:59.152350   17906 out.go:291] Setting OutFile to fd 1 ...
	I0729 00:48:59.152460   17906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:48:59.152470   17906 out.go:304] Setting ErrFile to fd 2...
	I0729 00:48:59.152475   17906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:48:59.152637   17906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 00:48:59.153192   17906 out.go:298] Setting JSON to false
	I0729 00:48:59.154024   17906 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1885,"bootTime":1722212254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 00:48:59.154087   17906 start.go:139] virtualization: kvm guest
	I0729 00:48:59.156168   17906 out.go:177] * [addons-657805] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 00:48:59.157603   17906 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 00:48:59.157615   17906 notify.go:220] Checking for updates...
	I0729 00:48:59.160142   17906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 00:48:59.161659   17906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:48:59.162968   17906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:48:59.164377   17906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 00:48:59.165569   17906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 00:48:59.167245   17906 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 00:48:59.197851   17906 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 00:48:59.199095   17906 start.go:297] selected driver: kvm2
	I0729 00:48:59.199113   17906 start.go:901] validating driver "kvm2" against <nil>
	I0729 00:48:59.199125   17906 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 00:48:59.199795   17906 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:48:59.199865   17906 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 00:48:59.214038   17906 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 00:48:59.214101   17906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 00:48:59.214355   17906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 00:48:59.214421   17906 cni.go:84] Creating CNI manager for ""
	I0729 00:48:59.214438   17906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:48:59.214451   17906 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 00:48:59.214518   17906 start.go:340] cluster config:
	{Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:48:59.214638   17906 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:48:59.216342   17906 out.go:177] * Starting "addons-657805" primary control-plane node in "addons-657805" cluster
	I0729 00:48:59.217454   17906 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 00:48:59.217489   17906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 00:48:59.217498   17906 cache.go:56] Caching tarball of preloaded images
	I0729 00:48:59.217561   17906 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 00:48:59.217570   17906 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 00:48:59.217854   17906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/config.json ...
	I0729 00:48:59.217873   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/config.json: {Name:mk09f93ef1170e1eddd5ac968b3e21a249e6a9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:48:59.217991   17906 start.go:360] acquireMachinesLock for addons-657805: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 00:48:59.218032   17906 start.go:364] duration metric: took 28.728µs to acquireMachinesLock for "addons-657805"
	I0729 00:48:59.218060   17906 start.go:93] Provisioning new machine with config: &{Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 00:48:59.218118   17906 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 00:48:59.219791   17906 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 00:48:59.219924   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:48:59.219957   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:48:59.234255   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0729 00:48:59.234658   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:48:59.235212   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:48:59.235226   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:48:59.235556   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:48:59.235748   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:48:59.235885   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:48:59.236054   17906 start.go:159] libmachine.API.Create for "addons-657805" (driver="kvm2")
	I0729 00:48:59.236082   17906 client.go:168] LocalClient.Create starting
	I0729 00:48:59.236129   17906 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 00:48:59.632092   17906 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 00:48:59.811121   17906 main.go:141] libmachine: Running pre-create checks...
	I0729 00:48:59.811146   17906 main.go:141] libmachine: (addons-657805) Calling .PreCreateCheck
	I0729 00:48:59.811661   17906 main.go:141] libmachine: (addons-657805) Calling .GetConfigRaw
	I0729 00:48:59.812105   17906 main.go:141] libmachine: Creating machine...
	I0729 00:48:59.812123   17906 main.go:141] libmachine: (addons-657805) Calling .Create
	I0729 00:48:59.812323   17906 main.go:141] libmachine: (addons-657805) Creating KVM machine...
	I0729 00:48:59.813559   17906 main.go:141] libmachine: (addons-657805) DBG | found existing default KVM network
	I0729 00:48:59.814281   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:48:59.814152   17928 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0729 00:48:59.814337   17906 main.go:141] libmachine: (addons-657805) DBG | created network xml: 
	I0729 00:48:59.814360   17906 main.go:141] libmachine: (addons-657805) DBG | <network>
	I0729 00:48:59.814368   17906 main.go:141] libmachine: (addons-657805) DBG |   <name>mk-addons-657805</name>
	I0729 00:48:59.814375   17906 main.go:141] libmachine: (addons-657805) DBG |   <dns enable='no'/>
	I0729 00:48:59.814381   17906 main.go:141] libmachine: (addons-657805) DBG |   
	I0729 00:48:59.814390   17906 main.go:141] libmachine: (addons-657805) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 00:48:59.814399   17906 main.go:141] libmachine: (addons-657805) DBG |     <dhcp>
	I0729 00:48:59.814410   17906 main.go:141] libmachine: (addons-657805) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 00:48:59.814419   17906 main.go:141] libmachine: (addons-657805) DBG |     </dhcp>
	I0729 00:48:59.814432   17906 main.go:141] libmachine: (addons-657805) DBG |   </ip>
	I0729 00:48:59.814444   17906 main.go:141] libmachine: (addons-657805) DBG |   
	I0729 00:48:59.814453   17906 main.go:141] libmachine: (addons-657805) DBG | </network>
	I0729 00:48:59.814464   17906 main.go:141] libmachine: (addons-657805) DBG | 
	I0729 00:48:59.819834   17906 main.go:141] libmachine: (addons-657805) DBG | trying to create private KVM network mk-addons-657805 192.168.39.0/24...
	I0729 00:48:59.883114   17906 main.go:141] libmachine: (addons-657805) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805 ...
	I0729 00:48:59.883147   17906 main.go:141] libmachine: (addons-657805) DBG | private KVM network mk-addons-657805 192.168.39.0/24 created
	I0729 00:48:59.883170   17906 main.go:141] libmachine: (addons-657805) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 00:48:59.883199   17906 main.go:141] libmachine: (addons-657805) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 00:48:59.883221   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:48:59.883000   17928 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:49:00.141462   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:00.141310   17928 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa...
	I0729 00:49:00.220687   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:00.220589   17928 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/addons-657805.rawdisk...
	I0729 00:49:00.220713   17906 main.go:141] libmachine: (addons-657805) DBG | Writing magic tar header
	I0729 00:49:00.220724   17906 main.go:141] libmachine: (addons-657805) DBG | Writing SSH key tar header
	I0729 00:49:00.220795   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:00.220716   17928 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805 ...
	I0729 00:49:00.220861   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805
	I0729 00:49:00.220880   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805 (perms=drwx------)
	I0729 00:49:00.220896   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 00:49:00.220914   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:49:00.220927   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 00:49:00.220940   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 00:49:00.220951   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 00:49:00.220963   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 00:49:00.220976   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 00:49:00.220987   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 00:49:00.220999   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 00:49:00.221017   17906 main.go:141] libmachine: (addons-657805) Creating domain...
	I0729 00:49:00.221028   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins
	I0729 00:49:00.221038   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home
	I0729 00:49:00.221048   17906 main.go:141] libmachine: (addons-657805) DBG | Skipping /home - not owner
	I0729 00:49:00.221940   17906 main.go:141] libmachine: (addons-657805) define libvirt domain using xml: 
	I0729 00:49:00.221962   17906 main.go:141] libmachine: (addons-657805) <domain type='kvm'>
	I0729 00:49:00.221971   17906 main.go:141] libmachine: (addons-657805)   <name>addons-657805</name>
	I0729 00:49:00.221977   17906 main.go:141] libmachine: (addons-657805)   <memory unit='MiB'>4000</memory>
	I0729 00:49:00.221985   17906 main.go:141] libmachine: (addons-657805)   <vcpu>2</vcpu>
	I0729 00:49:00.221991   17906 main.go:141] libmachine: (addons-657805)   <features>
	I0729 00:49:00.222000   17906 main.go:141] libmachine: (addons-657805)     <acpi/>
	I0729 00:49:00.222011   17906 main.go:141] libmachine: (addons-657805)     <apic/>
	I0729 00:49:00.222020   17906 main.go:141] libmachine: (addons-657805)     <pae/>
	I0729 00:49:00.222030   17906 main.go:141] libmachine: (addons-657805)     
	I0729 00:49:00.222038   17906 main.go:141] libmachine: (addons-657805)   </features>
	I0729 00:49:00.222046   17906 main.go:141] libmachine: (addons-657805)   <cpu mode='host-passthrough'>
	I0729 00:49:00.222058   17906 main.go:141] libmachine: (addons-657805)   
	I0729 00:49:00.222068   17906 main.go:141] libmachine: (addons-657805)   </cpu>
	I0729 00:49:00.222080   17906 main.go:141] libmachine: (addons-657805)   <os>
	I0729 00:49:00.222091   17906 main.go:141] libmachine: (addons-657805)     <type>hvm</type>
	I0729 00:49:00.222102   17906 main.go:141] libmachine: (addons-657805)     <boot dev='cdrom'/>
	I0729 00:49:00.222110   17906 main.go:141] libmachine: (addons-657805)     <boot dev='hd'/>
	I0729 00:49:00.222139   17906 main.go:141] libmachine: (addons-657805)     <bootmenu enable='no'/>
	I0729 00:49:00.222162   17906 main.go:141] libmachine: (addons-657805)   </os>
	I0729 00:49:00.222171   17906 main.go:141] libmachine: (addons-657805)   <devices>
	I0729 00:49:00.222203   17906 main.go:141] libmachine: (addons-657805)     <disk type='file' device='cdrom'>
	I0729 00:49:00.222223   17906 main.go:141] libmachine: (addons-657805)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/boot2docker.iso'/>
	I0729 00:49:00.222230   17906 main.go:141] libmachine: (addons-657805)       <target dev='hdc' bus='scsi'/>
	I0729 00:49:00.222240   17906 main.go:141] libmachine: (addons-657805)       <readonly/>
	I0729 00:49:00.222246   17906 main.go:141] libmachine: (addons-657805)     </disk>
	I0729 00:49:00.222258   17906 main.go:141] libmachine: (addons-657805)     <disk type='file' device='disk'>
	I0729 00:49:00.222282   17906 main.go:141] libmachine: (addons-657805)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 00:49:00.222298   17906 main.go:141] libmachine: (addons-657805)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/addons-657805.rawdisk'/>
	I0729 00:49:00.222309   17906 main.go:141] libmachine: (addons-657805)       <target dev='hda' bus='virtio'/>
	I0729 00:49:00.222316   17906 main.go:141] libmachine: (addons-657805)     </disk>
	I0729 00:49:00.222325   17906 main.go:141] libmachine: (addons-657805)     <interface type='network'>
	I0729 00:49:00.222331   17906 main.go:141] libmachine: (addons-657805)       <source network='mk-addons-657805'/>
	I0729 00:49:00.222337   17906 main.go:141] libmachine: (addons-657805)       <model type='virtio'/>
	I0729 00:49:00.222356   17906 main.go:141] libmachine: (addons-657805)     </interface>
	I0729 00:49:00.222376   17906 main.go:141] libmachine: (addons-657805)     <interface type='network'>
	I0729 00:49:00.222395   17906 main.go:141] libmachine: (addons-657805)       <source network='default'/>
	I0729 00:49:00.222413   17906 main.go:141] libmachine: (addons-657805)       <model type='virtio'/>
	I0729 00:49:00.222425   17906 main.go:141] libmachine: (addons-657805)     </interface>
	I0729 00:49:00.222436   17906 main.go:141] libmachine: (addons-657805)     <serial type='pty'>
	I0729 00:49:00.222448   17906 main.go:141] libmachine: (addons-657805)       <target port='0'/>
	I0729 00:49:00.222457   17906 main.go:141] libmachine: (addons-657805)     </serial>
	I0729 00:49:00.222467   17906 main.go:141] libmachine: (addons-657805)     <console type='pty'>
	I0729 00:49:00.222484   17906 main.go:141] libmachine: (addons-657805)       <target type='serial' port='0'/>
	I0729 00:49:00.222496   17906 main.go:141] libmachine: (addons-657805)     </console>
	I0729 00:49:00.222510   17906 main.go:141] libmachine: (addons-657805)     <rng model='virtio'>
	I0729 00:49:00.222533   17906 main.go:141] libmachine: (addons-657805)       <backend model='random'>/dev/random</backend>
	I0729 00:49:00.222549   17906 main.go:141] libmachine: (addons-657805)     </rng>
	I0729 00:49:00.222556   17906 main.go:141] libmachine: (addons-657805)     
	I0729 00:49:00.222564   17906 main.go:141] libmachine: (addons-657805)     
	I0729 00:49:00.222570   17906 main.go:141] libmachine: (addons-657805)   </devices>
	I0729 00:49:00.222580   17906 main.go:141] libmachine: (addons-657805) </domain>
	I0729 00:49:00.222591   17906 main.go:141] libmachine: (addons-657805) 
	I0729 00:49:00.228697   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:16:b4:f4 in network default
	I0729 00:49:00.229271   17906 main.go:141] libmachine: (addons-657805) Ensuring networks are active...
	I0729 00:49:00.229297   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:00.229989   17906 main.go:141] libmachine: (addons-657805) Ensuring network default is active
	I0729 00:49:00.230294   17906 main.go:141] libmachine: (addons-657805) Ensuring network mk-addons-657805 is active
	I0729 00:49:00.230780   17906 main.go:141] libmachine: (addons-657805) Getting domain xml...
	I0729 00:49:00.231456   17906 main.go:141] libmachine: (addons-657805) Creating domain...
	I0729 00:49:01.614082   17906 main.go:141] libmachine: (addons-657805) Waiting to get IP...
	I0729 00:49:01.615080   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:01.615445   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:01.615460   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:01.615429   17928 retry.go:31] will retry after 204.454408ms: waiting for machine to come up
	I0729 00:49:01.821896   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:01.822406   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:01.822429   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:01.822345   17928 retry.go:31] will retry after 340.902268ms: waiting for machine to come up
	I0729 00:49:02.165027   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:02.165450   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:02.165469   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:02.165426   17928 retry.go:31] will retry after 481.394629ms: waiting for machine to come up
	I0729 00:49:02.648032   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:02.648454   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:02.648483   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:02.648404   17928 retry.go:31] will retry after 440.65689ms: waiting for machine to come up
	I0729 00:49:03.091046   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:03.091475   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:03.091515   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:03.091415   17928 retry.go:31] will retry after 718.084669ms: waiting for machine to come up
	I0729 00:49:03.811506   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:03.811896   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:03.811933   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:03.811879   17928 retry.go:31] will retry after 711.527044ms: waiting for machine to come up
	I0729 00:49:04.525378   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:04.525939   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:04.526011   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:04.525864   17928 retry.go:31] will retry after 826.675486ms: waiting for machine to come up
	I0729 00:49:05.354082   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:05.354658   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:05.354685   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:05.354613   17928 retry.go:31] will retry after 1.397827758s: waiting for machine to come up
	I0729 00:49:06.753870   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:06.754272   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:06.754298   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:06.754220   17928 retry.go:31] will retry after 1.512959505s: waiting for machine to come up
	I0729 00:49:08.268435   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:08.268913   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:08.268939   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:08.268811   17928 retry.go:31] will retry after 1.714052035s: waiting for machine to come up
	I0729 00:49:09.985035   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:09.985427   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:09.985460   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:09.985385   17928 retry.go:31] will retry after 2.887581395s: waiting for machine to come up
	I0729 00:49:12.876427   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:12.876828   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:12.876853   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:12.876783   17928 retry.go:31] will retry after 3.107647028s: waiting for machine to come up
	I0729 00:49:15.986422   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:15.986834   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:15.986860   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:15.986796   17928 retry.go:31] will retry after 2.779081026s: waiting for machine to come up
	I0729 00:49:18.768270   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:18.768680   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:18.768702   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:18.768643   17928 retry.go:31] will retry after 4.387003412s: waiting for machine to come up
	I0729 00:49:23.160029   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.160691   17906 main.go:141] libmachine: (addons-657805) Found IP for machine: 192.168.39.18
	I0729 00:49:23.160715   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has current primary IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.160722   17906 main.go:141] libmachine: (addons-657805) Reserving static IP address...
	I0729 00:49:23.161246   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find host DHCP lease matching {name: "addons-657805", mac: "52:54:00:fe:86:06", ip: "192.168.39.18"} in network mk-addons-657805
	I0729 00:49:23.230979   17906 main.go:141] libmachine: (addons-657805) DBG | Getting to WaitForSSH function...
	I0729 00:49:23.231009   17906 main.go:141] libmachine: (addons-657805) Reserved static IP address: 192.168.39.18
	I0729 00:49:23.231021   17906 main.go:141] libmachine: (addons-657805) Waiting for SSH to be available...
	I0729 00:49:23.233566   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.233930   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.233956   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.234083   17906 main.go:141] libmachine: (addons-657805) DBG | Using SSH client type: external
	I0729 00:49:23.234139   17906 main.go:141] libmachine: (addons-657805) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa (-rw-------)
	I0729 00:49:23.234541   17906 main.go:141] libmachine: (addons-657805) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 00:49:23.234568   17906 main.go:141] libmachine: (addons-657805) DBG | About to run SSH command:
	I0729 00:49:23.234583   17906 main.go:141] libmachine: (addons-657805) DBG | exit 0
	I0729 00:49:23.371272   17906 main.go:141] libmachine: (addons-657805) DBG | SSH cmd err, output: <nil>: 
	I0729 00:49:23.371578   17906 main.go:141] libmachine: (addons-657805) KVM machine creation complete!
	I0729 00:49:23.371797   17906 main.go:141] libmachine: (addons-657805) Calling .GetConfigRaw
	I0729 00:49:23.372321   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:23.372497   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:23.372640   17906 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 00:49:23.372652   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:23.374001   17906 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 00:49:23.374018   17906 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 00:49:23.374025   17906 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 00:49:23.374032   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.376220   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.376562   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.376592   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.376719   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.376882   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.377036   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.377172   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.377367   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.377597   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.377615   17906 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 00:49:23.486200   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 00:49:23.486225   17906 main.go:141] libmachine: Detecting the provisioner...
	I0729 00:49:23.486234   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.489073   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.489508   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.489535   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.489650   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.489874   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.490069   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.490241   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.490429   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.490638   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.490651   17906 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 00:49:23.603866   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 00:49:23.603929   17906 main.go:141] libmachine: found compatible host: buildroot
	I0729 00:49:23.603936   17906 main.go:141] libmachine: Provisioning with buildroot...
	I0729 00:49:23.603942   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:49:23.604177   17906 buildroot.go:166] provisioning hostname "addons-657805"
	I0729 00:49:23.604197   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:49:23.604369   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.606966   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.607381   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.607407   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.607588   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.607783   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.607957   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.608099   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.608244   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.608403   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.608415   17906 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-657805 && echo "addons-657805" | sudo tee /etc/hostname
	I0729 00:49:23.733174   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-657805
	
	I0729 00:49:23.733201   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.736078   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.736434   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.736460   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.736634   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.736815   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.736944   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.737049   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.737191   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.737342   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.737356   17906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-657805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-657805/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-657805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 00:49:23.855919   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 00:49:23.855948   17906 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 00:49:23.855994   17906 buildroot.go:174] setting up certificates
	I0729 00:49:23.856007   17906 provision.go:84] configureAuth start
	I0729 00:49:23.856028   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:49:23.856319   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:23.858920   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.859298   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.859331   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.859461   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.861997   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.862294   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.862317   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.862441   17906 provision.go:143] copyHostCerts
	I0729 00:49:23.862505   17906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 00:49:23.862625   17906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 00:49:23.862717   17906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 00:49:23.862764   17906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.addons-657805 san=[127.0.0.1 192.168.39.18 addons-657805 localhost minikube]
	I0729 00:49:24.197977   17906 provision.go:177] copyRemoteCerts
	I0729 00:49:24.198036   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 00:49:24.198058   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.200828   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.201280   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.201317   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.201467   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.201659   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.201852   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.201988   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.289235   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 00:49:24.313074   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 00:49:24.336065   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 00:49:24.359513   17906 provision.go:87] duration metric: took 503.489652ms to configureAuth
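provision.go above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.18, addons-657805, localhost and minikube, then copies it to /etc/docker on the guest. A minimal sketch for inspecting those SANs from the host, assuming openssl is installed there:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'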
	I0729 00:49:24.359541   17906 buildroot.go:189] setting minikube options for container-runtime
	I0729 00:49:24.359735   17906 config.go:182] Loaded profile config "addons-657805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 00:49:24.359821   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.362494   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.362859   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.362890   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.363111   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.363302   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.363454   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.363600   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.363723   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:24.363882   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:24.363896   17906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 00:49:24.630050   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
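The SSH command above drops the CRIO_MINIKUBE_OPTIONS file under /etc/sysconfig and restarts crio, and the echoed output confirms the contents. A minimal sketch for re-checking it from the host, assuming the standard "minikube ssh" entry point and this profile name:
	minikube -p addons-657805 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"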
	I0729 00:49:24.630075   17906 main.go:141] libmachine: Checking connection to Docker...
	I0729 00:49:24.630082   17906 main.go:141] libmachine: (addons-657805) Calling .GetURL
	I0729 00:49:24.631396   17906 main.go:141] libmachine: (addons-657805) DBG | Using libvirt version 6000000
	I0729 00:49:24.633293   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.633692   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.633714   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.633840   17906 main.go:141] libmachine: Docker is up and running!
	I0729 00:49:24.633856   17906 main.go:141] libmachine: Reticulating splines...
	I0729 00:49:24.633862   17906 client.go:171] duration metric: took 25.397773855s to LocalClient.Create
	I0729 00:49:24.633882   17906 start.go:167] duration metric: took 25.397829972s to libmachine.API.Create "addons-657805"
	I0729 00:49:24.633891   17906 start.go:293] postStartSetup for "addons-657805" (driver="kvm2")
	I0729 00:49:24.633900   17906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 00:49:24.633916   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.634166   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 00:49:24.634191   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.636168   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.636499   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.636531   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.636629   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.636802   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.636920   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.637064   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.720699   17906 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 00:49:24.724864   17906 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 00:49:24.724891   17906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 00:49:24.724966   17906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 00:49:24.724991   17906 start.go:296] duration metric: took 91.094902ms for postStartSetup
	I0729 00:49:24.725040   17906 main.go:141] libmachine: (addons-657805) Calling .GetConfigRaw
	I0729 00:49:24.725618   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:24.728043   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.728399   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.728422   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.728645   17906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/config.json ...
	I0729 00:49:24.728849   17906 start.go:128] duration metric: took 25.510722443s to createHost
	I0729 00:49:24.728871   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.731183   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.731553   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.731581   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.731720   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.731887   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.732043   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.732170   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.732322   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:24.732474   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:24.732484   17906 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 00:49:24.843754   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214164.822536355
	
	I0729 00:49:24.843780   17906 fix.go:216] guest clock: 1722214164.822536355
	I0729 00:49:24.843787   17906 fix.go:229] Guest: 2024-07-29 00:49:24.822536355 +0000 UTC Remote: 2024-07-29 00:49:24.728860946 +0000 UTC m=+25.609017205 (delta=93.675409ms)
	I0729 00:49:24.843826   17906 fix.go:200] guest clock delta is within tolerance: 93.675409ms
	I0729 00:49:24.843832   17906 start.go:83] releasing machines lock for "addons-657805", held for 25.625791047s
	I0729 00:49:24.843868   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.844112   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:24.846571   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.846886   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.846904   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.847114   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.847602   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.847782   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.847897   17906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 00:49:24.847952   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.847983   17906 ssh_runner.go:195] Run: cat /version.json
	I0729 00:49:24.848007   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.850454   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.850653   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.850750   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.850776   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.851010   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.851030   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.851098   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.851279   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.851400   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.851471   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.851527   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.851659   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.851742   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.851881   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.932358   17906 ssh_runner.go:195] Run: systemctl --version
	I0729 00:49:24.956334   17906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 00:49:25.109846   17906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 00:49:25.115932   17906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 00:49:25.116005   17906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 00:49:25.133099   17906 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 00:49:25.133123   17906 start.go:495] detecting cgroup driver to use...
	I0729 00:49:25.133186   17906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 00:49:25.150111   17906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 00:49:25.164090   17906 docker.go:217] disabling cri-docker service (if available) ...
	I0729 00:49:25.164151   17906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 00:49:25.177959   17906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 00:49:25.191332   17906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 00:49:25.309961   17906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 00:49:25.465122   17906 docker.go:233] disabling docker service ...
	I0729 00:49:25.465185   17906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 00:49:25.479885   17906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 00:49:25.492735   17906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 00:49:25.630654   17906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 00:49:25.752102   17906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
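The block above stops, disables, and masks cri-docker and docker so that CRI-O ends up as the only active runtime. A minimal sketch for confirming that end state on the node:
	systemctl is-enabled docker.service cri-docker.service   # both should report "masked"
	systemctl is-active crio                                  # "active" once crio is restarted below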
	I0729 00:49:25.766123   17906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 00:49:25.784482   17906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 00:49:25.784543   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.794764   17906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 00:49:25.794833   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.805416   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.815439   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.825390   17906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 00:49:25.835475   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.846720   17906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.863829   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
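The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port default sysctl. A minimal sketch for checking the net effect on the node (key names as assumed from the sed patterns above):
	sudo grep -E '^ *(pause_image|cgroup_manager|conmon_cgroup|default_sysctls|"net\.ipv4\.ip_unprivileged_port_start)' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",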
	I0729 00:49:25.874313   17906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 00:49:25.884419   17906 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 00:49:25.884476   17906 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 00:49:25.897228   17906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
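The sysctl probe above fails only because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. The same probe-then-load pattern as a standalone sketch:
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"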
	I0729 00:49:25.906942   17906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 00:49:26.037142   17906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 00:49:26.175837   17906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 00:49:26.175931   17906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 00:49:26.180466   17906 start.go:563] Will wait 60s for crictl version
	I0729 00:49:26.180520   17906 ssh_runner.go:195] Run: which crictl
	I0729 00:49:26.184353   17906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 00:49:26.221927   17906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 00:49:26.222028   17906 ssh_runner.go:195] Run: crio --version
	I0729 00:49:26.248457   17906 ssh_runner.go:195] Run: crio --version
	I0729 00:49:26.276634   17906 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 00:49:26.278041   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:26.280495   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:26.280824   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:26.280849   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:26.281038   17906 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 00:49:26.285129   17906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 00:49:26.297728   17906 kubeadm.go:883] updating cluster {Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 00:49:26.297823   17906 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 00:49:26.297869   17906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 00:49:26.330169   17906 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 00:49:26.330236   17906 ssh_runner.go:195] Run: which lz4
	I0729 00:49:26.334003   17906 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 00:49:26.338001   17906 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 00:49:26.338030   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 00:49:27.698320   17906 crio.go:462] duration metric: took 1.364336648s to copy over tarball
	I0729 00:49:27.698400   17906 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 00:49:29.980924   17906 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282492078s)
	I0729 00:49:29.980957   17906 crio.go:469] duration metric: took 2.282605625s to extract the tarball
	I0729 00:49:29.980967   17906 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 00:49:30.018521   17906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 00:49:30.061246   17906 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 00:49:30.061268   17906 cache_images.go:84] Images are preloaded, skipping loading
	I0729 00:49:30.061275   17906 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.30.3 crio true true} ...
	I0729 00:49:30.061367   17906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-657805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 00:49:30.061426   17906 ssh_runner.go:195] Run: crio config
	I0729 00:49:30.116253   17906 cni.go:84] Creating CNI manager for ""
	I0729 00:49:30.116282   17906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:49:30.116297   17906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 00:49:30.116322   17906 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-657805 NodeName:addons-657805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 00:49:30.116587   17906 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-657805"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 00:49:30.116694   17906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 00:49:30.126292   17906 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 00:49:30.126351   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 00:49:30.135463   17906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 00:49:30.153494   17906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 00:49:30.171708   17906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
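The scp calls above stage the kubelet systemd drop-in, the kubelet unit file, and the kubeadm config onto the node. A minimal sketch for viewing the effective kubelet unit afterwards, using the same paths:
	systemctl cat kubelet
	# or read the staged files directly:
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /lib/systemd/system/kubelet.service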
	I0729 00:49:30.188677   17906 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0729 00:49:30.192597   17906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 00:49:30.204265   17906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 00:49:30.323804   17906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 00:49:30.340408   17906 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805 for IP: 192.168.39.18
	I0729 00:49:30.340435   17906 certs.go:194] generating shared ca certs ...
	I0729 00:49:30.340454   17906 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.340617   17906 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 00:49:30.480278   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt ...
	I0729 00:49:30.480309   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt: {Name:mk8fad2e722cf917c9f34cecde4889e198331a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.480479   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key ...
	I0729 00:49:30.480489   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key: {Name:mk2f62da53b8d736f082b80a4ee556be190bf299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.480557   17906 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 00:49:30.648740   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt ...
	I0729 00:49:30.648766   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt: {Name:mk47a5e124a0b1e459d544e63af797aed9fc919c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.648915   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key ...
	I0729 00:49:30.648925   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key: {Name:mke641a7096605541c4c9bff5414852198e2f104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.648987   17906 certs.go:256] generating profile certs ...
	I0729 00:49:30.649040   17906 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.key
	I0729 00:49:30.649054   17906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt with IP's: []
	I0729 00:49:30.886343   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt ...
	I0729 00:49:30.886370   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: {Name:mkf6b9c9729eabd73c3157348dae13e531b4bde5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.886526   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.key ...
	I0729 00:49:30.886535   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.key: {Name:mkbaded4f4aa28f2843e7e83c66b94c0a6e0a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.886605   17906 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba
	I0729 00:49:30.886622   17906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18]
	I0729 00:49:31.040860   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba ...
	I0729 00:49:31.040884   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba: {Name:mk26cee3f94a04e79d6ee1fb9d24deea9fa1f918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.041026   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba ...
	I0729 00:49:31.041039   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba: {Name:mka9a2fc29885d70db24d3c0b548df291093ac2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.041114   17906 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt
	I0729 00:49:31.041184   17906 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key
	I0729 00:49:31.041227   17906 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key
	I0729 00:49:31.041243   17906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt with IP's: []
	I0729 00:49:31.268569   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt ...
	I0729 00:49:31.268595   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt: {Name:mk7051e6b608fd5e24e32d0aa45888104a2365ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.268766   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key ...
	I0729 00:49:31.268782   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key: {Name:mk6bd729fae7e838d0eb4a8d5fd3ab3258a5b5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.268992   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 00:49:31.269032   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 00:49:31.269068   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 00:49:31.269096   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 00:49:31.269691   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 00:49:31.298626   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 00:49:31.323384   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 00:49:31.347053   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 00:49:31.371926   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 00:49:31.396438   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 00:49:31.423143   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 00:49:31.449656   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 00:49:31.476261   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 00:49:31.499889   17906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 00:49:31.516583   17906 ssh_runner.go:195] Run: openssl version
	I0729 00:49:31.522273   17906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 00:49:31.533141   17906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 00:49:31.537442   17906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 00:49:31.537493   17906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 00:49:31.543254   17906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
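The two steps above follow the standard OpenSSL subject-hash convention: the CA is hashed, then linked under /etc/ssl/certs by that hash so TLS clients can find it. The same relationship as a standalone sketch, with the hash value taken from this run:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0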
	I0729 00:49:31.553591   17906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 00:49:31.557414   17906 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 00:49:31.557463   17906 kubeadm.go:392] StartCluster: {Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:49:31.557567   17906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 00:49:31.557605   17906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 00:49:31.596168   17906 cri.go:89] found id: ""
	I0729 00:49:31.596248   17906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 00:49:31.605941   17906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
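The copy above puts the staged kubeadm.yaml.new into place; kubeadm init is invoked against it further down with a long --ignore-preflight-errors list. A minimal, largely side-effect-free sketch for exercising the same config, assuming the minikube-staged kubeadm binary from this run:
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run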
	I0729 00:49:31.615106   17906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 00:49:31.624105   17906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 00:49:31.624125   17906 kubeadm.go:157] found existing configuration files:
	
	I0729 00:49:31.624167   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 00:49:31.632727   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 00:49:31.632781   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 00:49:31.641729   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 00:49:31.650252   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 00:49:31.650314   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 00:49:31.659385   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 00:49:31.668349   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 00:49:31.668412   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 00:49:31.677268   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 00:49:31.685973   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 00:49:31.686032   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 00:49:31.695004   17906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 00:49:31.751842   17906 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 00:49:31.751909   17906 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 00:49:31.892868   17906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 00:49:31.892999   17906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 00:49:31.893140   17906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 00:49:32.088265   17906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 00:49:32.243869   17906 out.go:204]   - Generating certificates and keys ...
	I0729 00:49:32.243999   17906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 00:49:32.244124   17906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 00:49:32.251967   17906 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 00:49:32.462399   17906 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 00:49:32.533893   17906 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 00:49:32.649445   17906 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 00:49:32.763595   17906 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 00:49:32.763771   17906 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-657805 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0729 00:49:32.934533   17906 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 00:49:32.934678   17906 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-657805 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0729 00:49:33.089919   17906 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 00:49:33.160772   17906 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 00:49:33.361029   17906 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 00:49:33.361193   17906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 00:49:33.476473   17906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 00:49:33.789943   17906 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 00:49:33.965249   17906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 00:49:34.140954   17906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 00:49:34.269185   17906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 00:49:34.269725   17906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 00:49:34.273726   17906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 00:49:34.407423   17906 out.go:204]   - Booting up control plane ...
	I0729 00:49:34.407587   17906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 00:49:34.407694   17906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 00:49:34.407799   17906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 00:49:34.407947   17906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 00:49:34.408078   17906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 00:49:34.408133   17906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 00:49:34.431217   17906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 00:49:34.431326   17906 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 00:49:35.431700   17906 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001409251s
	I0729 00:49:35.431796   17906 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 00:49:40.432783   17906 kubeadm.go:310] [api-check] The API server is healthy after 5.002025846s
	I0729 00:49:40.443801   17906 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 00:49:40.460370   17906 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 00:49:40.490707   17906 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 00:49:40.490930   17906 kubeadm.go:310] [mark-control-plane] Marking the node addons-657805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 00:49:40.509553   17906 kubeadm.go:310] [bootstrap-token] Using token: 4tz30c.7n1hf4yodd1tj9r8
	I0729 00:49:40.511010   17906 out.go:204]   - Configuring RBAC rules ...
	I0729 00:49:40.511158   17906 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 00:49:40.528355   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 00:49:40.541042   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 00:49:40.546586   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 00:49:40.551245   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 00:49:40.555682   17906 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 00:49:40.837498   17906 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 00:49:41.278085   17906 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 00:49:41.837186   17906 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 00:49:41.838141   17906 kubeadm.go:310] 
	I0729 00:49:41.838233   17906 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 00:49:41.838251   17906 kubeadm.go:310] 
	I0729 00:49:41.838340   17906 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 00:49:41.838350   17906 kubeadm.go:310] 
	I0729 00:49:41.838394   17906 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 00:49:41.838473   17906 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 00:49:41.838545   17906 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 00:49:41.838554   17906 kubeadm.go:310] 
	I0729 00:49:41.838626   17906 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 00:49:41.838653   17906 kubeadm.go:310] 
	I0729 00:49:41.838738   17906 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 00:49:41.838748   17906 kubeadm.go:310] 
	I0729 00:49:41.838812   17906 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 00:49:41.838911   17906 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 00:49:41.839004   17906 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 00:49:41.839014   17906 kubeadm.go:310] 
	I0729 00:49:41.839158   17906 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 00:49:41.839237   17906 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 00:49:41.839244   17906 kubeadm.go:310] 
	I0729 00:49:41.839311   17906 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4tz30c.7n1hf4yodd1tj9r8 \
	I0729 00:49:41.839396   17906 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 \
	I0729 00:49:41.839415   17906 kubeadm.go:310] 	--control-plane 
	I0729 00:49:41.839421   17906 kubeadm.go:310] 
	I0729 00:49:41.839489   17906 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 00:49:41.839497   17906 kubeadm.go:310] 
	I0729 00:49:41.839580   17906 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4tz30c.7n1hf4yodd1tj9r8 \
	I0729 00:49:41.839687   17906 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 
	I0729 00:49:41.840124   17906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 00:49:41.840193   17906 cni.go:84] Creating CNI manager for ""
	I0729 00:49:41.840210   17906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:49:41.841954   17906 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 00:49:41.843023   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 00:49:41.854010   17906 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 00:49:41.872370   17906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 00:49:41.872394   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:41.872456   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-657805 minikube.k8s.io/updated_at=2024_07_29T00_49_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=addons-657805 minikube.k8s.io/primary=true
	I0729 00:49:42.009398   17906 ops.go:34] apiserver oom_adj: -16
	I0729 00:49:42.009430   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:42.509447   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:43.010415   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:43.509514   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:44.009837   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:44.510078   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:45.009708   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:45.510416   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:46.009642   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:46.510214   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:47.010060   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:47.509694   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:48.009883   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:48.510148   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:49.010385   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:49.510249   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:50.010201   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:50.510062   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:51.009494   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:51.510074   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:52.009824   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:52.510433   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:53.009851   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:53.509613   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:54.009509   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:54.509822   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:54.615836   17906 kubeadm.go:1113] duration metric: took 12.743492105s to wait for elevateKubeSystemPrivileges
	I0729 00:49:54.615869   17906 kubeadm.go:394] duration metric: took 23.058408518s to StartCluster
	I0729 00:49:54.615888   17906 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:54.616017   17906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:49:54.616486   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:54.616685   17906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 00:49:54.616709   17906 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 00:49:54.616797   17906 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 00:49:54.616901   17906 config.go:182] Loaded profile config "addons-657805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 00:49:54.616932   17906 addons.go:69] Setting yakd=true in profile "addons-657805"
	I0729 00:49:54.617194   17906 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-657805"
	I0729 00:49:54.617236   17906 addons.go:234] Setting addon yakd=true in "addons-657805"
	I0729 00:49:54.617241   17906 addons.go:69] Setting ingress-dns=true in profile "addons-657805"
	I0729 00:49:54.617274   17906 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-657805"
	I0729 00:49:54.617297   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.617313   17906 addons.go:234] Setting addon ingress-dns=true in "addons-657805"
	I0729 00:49:54.616963   17906 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-657805"
	I0729 00:49:54.617414   17906 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-657805"
	I0729 00:49:54.617452   17906 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-657805"
	I0729 00:49:54.617490   17906 addons.go:69] Setting registry=true in profile "addons-657805"
	I0729 00:49:54.617524   17906 addons.go:234] Setting addon registry=true in "addons-657805"
	I0729 00:49:54.617528   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.617550   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.617585   17906 addons.go:69] Setting ingress=true in profile "addons-657805"
	I0729 00:49:54.617639   17906 addons.go:234] Setting addon ingress=true in "addons-657805"
	I0729 00:49:54.617673   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618028   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.616969   17906 addons.go:69] Setting metrics-server=true in profile "addons-657805"
	I0729 00:49:54.618123   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.618153   17906 addons.go:234] Setting addon metrics-server=true in "addons-657805"
	I0729 00:49:54.616954   17906 addons.go:69] Setting inspektor-gadget=true in profile "addons-657805"
	I0729 00:49:54.618193   17906 addons.go:234] Setting addon inspektor-gadget=true in "addons-657805"
	I0729 00:49:54.618216   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618290   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618297   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618315   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.618372   17906 addons.go:69] Setting volumesnapshots=true in profile "addons-657805"
	I0729 00:49:54.618402   17906 addons.go:234] Setting addon volumesnapshots=true in "addons-657805"
	I0729 00:49:54.618431   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618486   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.616946   17906 addons.go:69] Setting helm-tiller=true in profile "addons-657805"
	I0729 00:49:54.618556   17906 addons.go:234] Setting addon helm-tiller=true in "addons-657805"
	I0729 00:49:54.618628   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618632   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618665   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.618696   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.616938   17906 addons.go:69] Setting gcp-auth=true in profile "addons-657805"
	I0729 00:49:54.618721   17906 mustload.go:65] Loading cluster: addons-657805
	I0729 00:49:54.618867   17906 addons.go:69] Setting volcano=true in profile "addons-657805"
	I0729 00:49:54.618887   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618894   17906 addons.go:234] Setting addon volcano=true in "addons-657805"
	I0729 00:49:54.618917   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.616971   17906 addons.go:69] Setting default-storageclass=true in profile "addons-657805"
	I0729 00:49:54.619319   17906 out.go:177] * Verifying Kubernetes components...
	I0729 00:49:54.619379   17906 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-657805"
	I0729 00:49:54.619416   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.619416   17906 addons.go:69] Setting storage-provisioner=true in profile "addons-657805"
	I0729 00:49:54.619441   17906 addons.go:234] Setting addon storage-provisioner=true in "addons-657805"
	I0729 00:49:54.619469   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.619817   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.619862   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.620070   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620094   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.619335   17906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-657805"
	I0729 00:49:54.620413   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620442   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.620509   17906 config.go:182] Loaded profile config "addons-657805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 00:49:54.620640   17906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 00:49:54.620881   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620882   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620906   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.621186   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.619339   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618028   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.616961   17906 addons.go:69] Setting cloud-spanner=true in profile "addons-657805"
	I0729 00:49:54.626044   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.626080   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.626291   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.626313   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631130   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.631177   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631347   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.631377   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631444   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631524   17906 addons.go:234] Setting addon cloud-spanner=true in "addons-657805"
	I0729 00:49:54.631578   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.631639   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.653081   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0729 00:49:54.653094   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0729 00:49:54.653547   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0729 00:49:54.653669   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.654159   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.654189   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.654390   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.654655   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.655230   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.655261   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.655466   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0729 00:49:54.655597   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.655622   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.657877   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.657918   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.658326   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.658377   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.658391   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.658795   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.659166   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.659217   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.659397   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.659437   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.659866   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.659883   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.660236   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.660274   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0729 00:49:54.660613   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45317
	I0729 00:49:54.666821   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0729 00:49:54.667218   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 00:49:54.667292   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33977
	I0729 00:49:54.667361   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0729 00:49:54.667433   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0729 00:49:54.667560   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.667584   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.667610   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.667730   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.667778   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.667978   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668051   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668097   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668151   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668189   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.669234   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669250   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669359   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669367   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669466   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669474   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669577   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669588   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669684   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669694   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669794   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669802   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669845   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.669878   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.669917   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0729 00:49:54.670054   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.670090   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.670223   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.670948   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.670989   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.671022   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.671430   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.671459   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.671634   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.672039   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.672067   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.672971   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.673010   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.673442   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.673465   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.673647   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.673662   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.673837   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.673850   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.674047   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.674162   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.676137   17906 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-657805"
	I0729 00:49:54.676178   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.676516   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.676542   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.683409   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.683546   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.684008   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.684055   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.684612   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.684644   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.686887   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.689967   17906 addons.go:234] Setting addon default-storageclass=true in "addons-657805"
	I0729 00:49:54.690010   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.690350   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.690386   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.701270   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0729 00:49:54.701759   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.702664   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.702683   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.703071   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.703248   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.706622   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.709177   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0729 00:49:54.709672   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.709719   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0729 00:49:54.710212   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.710233   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.710733   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.710908   17906 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 00:49:54.710918   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.711176   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.711309   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.711320   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.712192   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.712271   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 00:49:54.712284   17906 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 00:49:54.712301   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.712474   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.713564   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.715211   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 00:49:54.715728   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.716344   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 00:49:54.716359   17906 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 00:49:54.716378   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.716419   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0729 00:49:54.716567   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.717153   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.717182   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.717531   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.717676   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.717841   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.717958   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.718048   17906 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 00:49:54.718160   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.718881   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.718897   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.719185   17906 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 00:49:54.719207   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 00:49:54.719225   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.719417   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.719617   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.719823   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.720866   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.720892   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.721484   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.721667   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.721819   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.721957   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.722391   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0729 00:49:54.722453   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
	I0729 00:49:54.722746   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.722787   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.723469   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.723485   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.723598   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.723608   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.723838   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.724246   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.724447   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.724507   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.725563   17906 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 00:49:54.726225   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0729 00:49:54.726490   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0729 00:49:54.726704   17906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 00:49:54.726718   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 00:49:54.726734   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.726805   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.727145   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0729 00:49:54.727295   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I0729 00:49:54.727411   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.727489   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.727816   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.727834   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.727963   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.727976   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.728028   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.728253   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.728450   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.728503   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.728549   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.728566   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.728591   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42807
	I0729 00:49:54.728746   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.728844   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.728872   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.728912   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.729098   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.729116   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.729172   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.729188   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.729316   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.729332   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.729628   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.729666   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.729706   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.729763   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.729816   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.730064   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.730239   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.730509   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.731158   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.731347   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.731387   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.731499   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.731662   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.732130   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.732233   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.732252   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.732287   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.732753   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.732930   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.733130   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.733497   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.733926   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0729 00:49:54.734330   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.734416   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.734735   17906 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 00:49:54.734745   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.734814   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.734828   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.734994   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 00:49:54.735073   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.735873   17906 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 00:49:54.735887   17906 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 00:49:54.735905   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.736117   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.736153   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.736395   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 00:49:54.737204   17906 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 00:49:54.738763   17906 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 00:49:54.738778   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 00:49:54.738795   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.739245   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.739296   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 00:49:54.739409   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 00:49:54.739948   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.739969   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.740297   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.740491   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.740867   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.741009   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.741752   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 00:49:54.741833   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 00:49:54.742258   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.742653   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.742671   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.742924   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.743072   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.743355   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.743500   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.744122   17906 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 00:49:54.744136   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 00:49:54.744150   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.745444   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 00:49:54.746956   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 00:49:54.747444   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.748021   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.748040   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.749065   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.749276   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.749344   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 00:49:54.749440   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.749642   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.749900   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0729 00:49:54.750182   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0729 00:49:54.750809   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.751414   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.751436   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.751844   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 00:49:54.752098   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.753301   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.753337   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.753547   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0729 00:49:54.753660   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I0729 00:49:54.753741   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0729 00:49:54.754253   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 00:49:54.754576   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.754660   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.754725   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.754781   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.755232   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 00:49:54.755257   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 00:49:54.755274   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.755693   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.755704   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.755709   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.755720   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.755827   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.755837   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.756212   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.756250   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.756424   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.756818   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.756842   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.756862   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.756884   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.757042   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.757092   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.757820   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.758368   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.758404   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.758608   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34125
	I0729 00:49:54.758785   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.759074   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:54.759087   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:54.759260   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.759277   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:54.759294   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:54.759303   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:54.759310   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:54.759467   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:54.759478   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 00:49:54.759545   17906 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 00:49:54.759797   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.759815   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.760192   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.760479   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.761438   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.762526   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.762919   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.762937   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.762977   17906 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 00:49:54.763096   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.763211   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.763364   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.763516   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.763647   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.764648   17906 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 00:49:54.764773   17906 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 00:49:54.764786   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 00:49:54.764802   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.766165   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 00:49:54.766185   17906 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 00:49:54.766203   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.769269   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.769708   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.769729   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.769991   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.770194   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.770372   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.770442   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.770673   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.770899   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.770915   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.771120   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.771292   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.771432   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.771588   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.774309   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0729 00:49:54.774774   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.775249   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.775271   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.775561   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.775792   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.777250   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 00:49:54.777461   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.777612   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.777850   17906 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 00:49:54.777865   17906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 00:49:54.777883   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.778070   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.778088   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.778460   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.778629   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.780645   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.780651   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0729 00:49:54.781179   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.782248   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.782270   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.782287   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.782659   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.782692   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.782834   17906 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 00:49:54.782890   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.782952   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.783133   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.783236   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.783286   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.783470   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.784745   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0729 00:49:54.784875   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.785238   17906 out.go:177]   - Using image docker.io/busybox:stable
	I0729 00:49:54.785281   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.785685   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.785698   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.785980   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.786156   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.786193   17906 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 00:49:54.786305   17906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 00:49:54.786318   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 00:49:54.786328   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.787865   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.788470   17906 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 00:49:54.789271   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.789334   17906 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 00:49:54.789772   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.789806   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.789884   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.790015   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.790116   17906 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 00:49:54.790142   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 00:49:54.790163   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.790119   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.790289   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.790773   17906 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 00:49:54.790786   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 00:49:54.790798   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.793507   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.793778   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.793828   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.793846   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.794160   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.794328   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.794346   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.794444   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.794535   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.794617   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.794656   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.794716   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.794971   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.795112   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:55.072421   17906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 00:49:55.072485   17906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 00:49:55.116870   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 00:49:55.189267   17906 node_ready.go:35] waiting up to 6m0s for node "addons-657805" to be "Ready" ...
	I0729 00:49:55.193269   17906 node_ready.go:49] node "addons-657805" has status "Ready":"True"
	I0729 00:49:55.193292   17906 node_ready.go:38] duration metric: took 4.001508ms for node "addons-657805" to be "Ready" ...
	I0729 00:49:55.193300   17906 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 00:49:55.202676   17906 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace to be "Ready" ...
	I0729 00:49:55.265951   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 00:49:55.270689   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 00:49:55.284057   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 00:49:55.300524   17906 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 00:49:55.300555   17906 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 00:49:55.301832   17906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 00:49:55.301882   17906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 00:49:55.324027   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 00:49:55.339640   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 00:49:55.371991   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 00:49:55.372023   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 00:49:55.377604   17906 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 00:49:55.377634   17906 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 00:49:55.377843   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 00:49:55.377865   17906 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 00:49:55.406710   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 00:49:55.410123   17906 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 00:49:55.410150   17906 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 00:49:55.434351   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 00:49:55.434377   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 00:49:55.511476   17906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 00:49:55.511505   17906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 00:49:55.526194   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 00:49:55.526223   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 00:49:55.561940   17906 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 00:49:55.561962   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 00:49:55.582147   17906 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 00:49:55.582173   17906 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 00:49:55.608900   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 00:49:55.608922   17906 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 00:49:55.616808   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 00:49:55.616829   17906 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 00:49:55.655468   17906 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 00:49:55.655489   17906 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 00:49:55.755684   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 00:49:55.755711   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 00:49:55.793529   17906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 00:49:55.793565   17906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 00:49:55.823152   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 00:49:55.823173   17906 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 00:49:55.865314   17906 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 00:49:55.865342   17906 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 00:49:55.885258   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 00:49:55.897663   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 00:49:55.906400   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 00:49:55.906418   17906 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 00:49:55.932495   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 00:49:55.932516   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 00:49:55.985755   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 00:49:55.985786   17906 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 00:49:55.997149   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 00:49:56.015636   17906 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 00:49:56.015659   17906 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 00:49:56.094125   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 00:49:56.094149   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 00:49:56.151609   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 00:49:56.151632   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 00:49:56.263071   17906 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 00:49:56.263092   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 00:49:56.293802   17906 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 00:49:56.293824   17906 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 00:49:56.363683   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 00:49:56.363707   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 00:49:56.407530   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 00:49:56.523004   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 00:49:56.523028   17906 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 00:49:56.657695   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 00:49:56.704167   17906 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 00:49:56.704197   17906 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 00:49:56.948050   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 00:49:56.948083   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 00:49:57.068094   17906 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 00:49:57.068120   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 00:49:57.231102   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 00:49:57.231124   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 00:49:57.234328   17906 pod_ready.go:102] pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace has status "Ready":"False"
	I0729 00:49:57.463110   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 00:49:57.589959   17906 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.517440919s)
	I0729 00:49:57.589995   17906 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 00:49:57.683519   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 00:49:57.683549   17906 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 00:49:57.932817   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 00:49:58.115319   17906 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-657805" context rescaled to 1 replicas
	I0729 00:49:58.650909   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.534008215s)
	I0729 00:49:58.650968   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:58.650981   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:58.651294   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:58.651336   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:58.651363   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:58.651370   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:58.651376   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:58.651618   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:58.651625   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:58.651636   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296341   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.030348008s)
	I0729 00:49:59.296395   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296405   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296408   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.025681314s)
	I0729 00:49:59.296433   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.012348778s)
	I0729 00:49:59.296460   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296468   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296477   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296479   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296873   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.296875   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.296884   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.296894   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296876   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.296903   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296905   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.296909   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296910   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296914   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296914   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.296922   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296930   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296918   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296972   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.297295   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.297300   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.297308   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.297315   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.297328   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.297335   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.297365   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.297384   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.297391   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.310397   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.310429   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.310678   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.310737   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.310753   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.708188   17906 pod_ready.go:102] pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace has status "Ready":"False"
	I0729 00:50:00.790612   17906 pod_ready.go:92] pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:00.790636   17906 pod_ready.go:81] duration metric: took 5.587932348s for pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.790645   17906 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t65vz" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.863444   17906 pod_ready.go:92] pod "coredns-7db6d8ff4d-t65vz" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:00.863479   17906 pod_ready.go:81] duration metric: took 72.826436ms for pod "coredns-7db6d8ff4d-t65vz" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.863492   17906 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.928817   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.604750306s)
	I0729 00:50:00.928874   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:00.928889   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:00.929296   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:00.929304   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:00.929317   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:00.929327   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:00.929335   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:00.929577   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:00.929627   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:00.929645   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:00.947041   17906 pod_ready.go:92] pod "etcd-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:00.947076   17906 pod_ready.go:81] duration metric: took 83.574911ms for pod "etcd-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.947089   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.013509   17906 pod_ready.go:92] pod "kube-apiserver-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.013529   17906 pod_ready.go:81] duration metric: took 66.432029ms for pod "kube-apiserver-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.013538   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.030554   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:01.030576   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:01.030895   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:01.030898   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:01.030923   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:01.055189   17906 pod_ready.go:92] pod "kube-controller-manager-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.055214   17906 pod_ready.go:81] duration metric: took 41.669652ms for pod "kube-controller-manager-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.055224   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvp86" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.112577   17906 pod_ready.go:92] pod "kube-proxy-kvp86" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.112598   17906 pod_ready.go:81] duration metric: took 57.368109ms for pod "kube-proxy-kvp86" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.112606   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.537549   17906 pod_ready.go:92] pod "kube-scheduler-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.537576   17906 pod_ready.go:81] duration metric: took 424.963454ms for pod "kube-scheduler-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.537585   17906 pod_ready.go:38] duration metric: took 6.344275005s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 00:50:01.537600   17906 api_server.go:52] waiting for apiserver process to appear ...
	I0729 00:50:01.537656   17906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 00:50:01.747973   17906 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 00:50:01.748011   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:50:01.750727   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:01.751237   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:50:01.751270   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:01.751465   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:50:01.751651   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:50:01.751855   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:50:01.752049   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:50:02.526757   17906 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 00:50:02.737600   17906 addons.go:234] Setting addon gcp-auth=true in "addons-657805"
	I0729 00:50:02.737663   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:50:02.737987   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:50:02.738017   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:50:02.753382   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0729 00:50:02.753748   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:50:02.754229   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:50:02.754251   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:50:02.754636   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:50:02.755289   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:50:02.755321   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:50:02.771176   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0729 00:50:02.771700   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:50:02.772325   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:50:02.772350   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:50:02.772839   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:50:02.773045   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:50:02.774969   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:50:02.775235   17906 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 00:50:02.775259   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:50:02.777809   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:02.778224   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:50:02.778252   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:02.778405   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:50:02.778582   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:50:02.778725   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:50:02.778879   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:50:03.903800   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.564128622s)
	I0729 00:50:03.903845   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.497111713s)
	I0729 00:50:03.903857   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903865   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903869   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.903874   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.903931   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.018635942s)
	I0729 00:50:03.903950   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.006260506s)
	I0729 00:50:03.903966   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903977   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.903977   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903990   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904080   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.906894259s)
	I0729 00:50:03.904114   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904127   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904226   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.496666681s)
	I0729 00:50:03.904246   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904255   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904282   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904303   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904315   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904321   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904326   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904331   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904336   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904340   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904345   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904307   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904391   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904410   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904416   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904418   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.246694443s)
	W0729 00:50:03.904447   17906 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 00:50:03.904465   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904477   17906 retry.go:31] will retry after 316.845474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 00:50:03.904480   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904424   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904491   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904496   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904498   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904634   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.441491896s)
	I0729 00:50:03.904667   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904675   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904701   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904732   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904739   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904748   17906 addons.go:475] Verifying addon registry=true in "addons-657805"
	I0729 00:50:03.905820   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.905879   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.905888   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.905973   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.905998   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906005   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906012   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.906019   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.906186   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906222   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906230   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906238   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.906245   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.906385   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906414   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906421   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906796   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906843   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906854   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906863   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.906873   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.906929   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906957   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906965   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906973   17906 addons.go:475] Verifying addon metrics-server=true in "addons-657805"
	I0729 00:50:03.908227   17906 out.go:177] * Verifying registry addon...
	I0729 00:50:03.909125   17906 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-657805 service yakd-dashboard -n yakd-dashboard
	
	I0729 00:50:03.908373   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.908395   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.909632   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.908431   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.908447   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.908464   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.909696   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.908484   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.909743   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.909750   17906 addons.go:475] Verifying addon ingress=true in "addons-657805"
	I0729 00:50:03.910641   17906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 00:50:03.910856   17906 out.go:177] * Verifying ingress addon...
	I0729 00:50:03.912521   17906 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 00:50:03.918982   17906 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 00:50:03.919003   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:03.922912   17906 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 00:50:03.922927   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:04.222285   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 00:50:04.415637   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:04.418129   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:04.933552   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:04.934438   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:05.282528   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.349654427s)
	I0729 00:50:05.282544   17906 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.744870221s)
	I0729 00:50:05.282600   17906 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.507342053s)
	I0729 00:50:05.282621   17906 api_server.go:72] duration metric: took 10.665881371s to wait for apiserver process to appear ...
	I0729 00:50:05.282640   17906 api_server.go:88] waiting for apiserver healthz status ...
	I0729 00:50:05.282662   17906 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0729 00:50:05.282585   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:05.282749   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:05.283122   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:05.283143   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:05.283153   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:05.283163   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:05.283180   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:05.283401   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:05.283423   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:05.283434   17906 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-657805"
	I0729 00:50:05.284333   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 00:50:05.285262   17906 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 00:50:05.287109   17906 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 00:50:05.287812   17906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 00:50:05.288457   17906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 00:50:05.288476   17906 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 00:50:05.294716   17906 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0729 00:50:05.302957   17906 api_server.go:141] control plane version: v1.30.3
	I0729 00:50:05.302988   17906 api_server.go:131] duration metric: took 20.339506ms to wait for apiserver health ...
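The healthz probe above hits the apiserver's health endpoint directly at the VM's API address. The same check can be reproduced by hand with kubectl's raw-request support (context name taken from the log; "ok" is what a healthy apiserver returns):

	kubectl --context addons-657805 get --raw='/healthz'
	ok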
	I0729 00:50:05.302998   17906 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 00:50:05.310100   17906 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 00:50:05.310124   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:05.332140   17906 system_pods.go:59] 19 kube-system pods found
	I0729 00:50:05.332166   17906 system_pods.go:61] "coredns-7db6d8ff4d-sglhh" [3b1ee481-ea1f-4fd0-8b99-531a84047e07] Running
	I0729 00:50:05.332171   17906 system_pods.go:61] "coredns-7db6d8ff4d-t65vz" [ad130721-0b7d-4bfe-ac45-f7f12f0815b5] Running
	I0729 00:50:05.332178   17906 system_pods.go:61] "csi-hostpath-attacher-0" [3ae11817-81ae-4f2a-ab6f-60451af82417] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 00:50:05.332182   17906 system_pods.go:61] "csi-hostpath-resizer-0" [83f41608-3bd5-43db-90b4-3e748933f87f] Pending
	I0729 00:50:05.332187   17906 system_pods.go:61] "csi-hostpathplugin-xcdz6" [8cc92d3f-35c2-4eca-9b3d-065617a32154] Pending
	I0729 00:50:05.332190   17906 system_pods.go:61] "etcd-addons-657805" [e295d075-78a7-46b3-beaa-419b4195a7ae] Running
	I0729 00:50:05.332193   17906 system_pods.go:61] "kube-apiserver-addons-657805" [bdea928e-5e23-4f0c-8bd4-a2027d562a62] Running
	I0729 00:50:05.332196   17906 system_pods.go:61] "kube-controller-manager-addons-657805" [28699945-1451-442f-b75d-55c7de3e3b54] Running
	I0729 00:50:05.332202   17906 system_pods.go:61] "kube-ingress-dns-minikube" [a3d38178-b58f-4c20-aa2c-a333b13ba547] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 00:50:05.332206   17906 system_pods.go:61] "kube-proxy-kvp86" [5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0] Running
	I0729 00:50:05.332209   17906 system_pods.go:61] "kube-scheduler-addons-657805" [04d2e84b-63d7-4b48-a55d-bf912e2acc15] Running
	I0729 00:50:05.332214   17906 system_pods.go:61] "metrics-server-c59844bb4-5pktj" [f3d59e24-fa87-4a81-a526-dd3281cc933f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 00:50:05.332219   17906 system_pods.go:61] "nvidia-device-plugin-daemonset-q9787" [88e23009-4d91-4d63-b0ed-514cd85efcad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 00:50:05.332227   17906 system_pods.go:61] "registry-656c9c8d9c-vvt4p" [c2c15540-cbdd-4d9d-93ee-242fed10a376] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 00:50:05.332234   17906 system_pods.go:61] "registry-proxy-4dnlr" [776b01e7-fab4-4418-bc4f-350a057e9cd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 00:50:05.332240   17906 system_pods.go:61] "snapshot-controller-745499f584-7bgm5" [54414c56-b0fd-4b67-9109-d0caf1d9d941] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.332248   17906 system_pods.go:61] "snapshot-controller-745499f584-qtkvv" [4af9fa15-7f2e-4444-acd5-000dae3daf9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.332252   17906 system_pods.go:61] "storage-provisioner" [52e2a3d2-506b-440e-b1e3-485de0fe81e5] Running
	I0729 00:50:05.332258   17906 system_pods.go:61] "tiller-deploy-6677d64bcd-ctj2p" [19ff6eb3-431f-4705-9f70-09fb802cccd1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 00:50:05.332265   17906 system_pods.go:74] duration metric: took 29.260919ms to wait for pod list to return data ...
	I0729 00:50:05.332274   17906 default_sa.go:34] waiting for default service account to be created ...
	I0729 00:50:05.344177   17906 default_sa.go:45] found service account: "default"
	I0729 00:50:05.344202   17906 default_sa.go:55] duration metric: took 11.92175ms for default service account to be created ...
	I0729 00:50:05.344211   17906 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 00:50:05.365241   17906 system_pods.go:86] 19 kube-system pods found
	I0729 00:50:05.365271   17906 system_pods.go:89] "coredns-7db6d8ff4d-sglhh" [3b1ee481-ea1f-4fd0-8b99-531a84047e07] Running
	I0729 00:50:05.365277   17906 system_pods.go:89] "coredns-7db6d8ff4d-t65vz" [ad130721-0b7d-4bfe-ac45-f7f12f0815b5] Running
	I0729 00:50:05.365284   17906 system_pods.go:89] "csi-hostpath-attacher-0" [3ae11817-81ae-4f2a-ab6f-60451af82417] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 00:50:05.365289   17906 system_pods.go:89] "csi-hostpath-resizer-0" [83f41608-3bd5-43db-90b4-3e748933f87f] Pending
	I0729 00:50:05.365295   17906 system_pods.go:89] "csi-hostpathplugin-xcdz6" [8cc92d3f-35c2-4eca-9b3d-065617a32154] Pending
	I0729 00:50:05.365299   17906 system_pods.go:89] "etcd-addons-657805" [e295d075-78a7-46b3-beaa-419b4195a7ae] Running
	I0729 00:50:05.365303   17906 system_pods.go:89] "kube-apiserver-addons-657805" [bdea928e-5e23-4f0c-8bd4-a2027d562a62] Running
	I0729 00:50:05.365308   17906 system_pods.go:89] "kube-controller-manager-addons-657805" [28699945-1451-442f-b75d-55c7de3e3b54] Running
	I0729 00:50:05.365315   17906 system_pods.go:89] "kube-ingress-dns-minikube" [a3d38178-b58f-4c20-aa2c-a333b13ba547] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 00:50:05.365320   17906 system_pods.go:89] "kube-proxy-kvp86" [5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0] Running
	I0729 00:50:05.365325   17906 system_pods.go:89] "kube-scheduler-addons-657805" [04d2e84b-63d7-4b48-a55d-bf912e2acc15] Running
	I0729 00:50:05.365330   17906 system_pods.go:89] "metrics-server-c59844bb4-5pktj" [f3d59e24-fa87-4a81-a526-dd3281cc933f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 00:50:05.365341   17906 system_pods.go:89] "nvidia-device-plugin-daemonset-q9787" [88e23009-4d91-4d63-b0ed-514cd85efcad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 00:50:05.365365   17906 system_pods.go:89] "registry-656c9c8d9c-vvt4p" [c2c15540-cbdd-4d9d-93ee-242fed10a376] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 00:50:05.365376   17906 system_pods.go:89] "registry-proxy-4dnlr" [776b01e7-fab4-4418-bc4f-350a057e9cd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 00:50:05.365383   17906 system_pods.go:89] "snapshot-controller-745499f584-7bgm5" [54414c56-b0fd-4b67-9109-d0caf1d9d941] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.365389   17906 system_pods.go:89] "snapshot-controller-745499f584-qtkvv" [4af9fa15-7f2e-4444-acd5-000dae3daf9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.365395   17906 system_pods.go:89] "storage-provisioner" [52e2a3d2-506b-440e-b1e3-485de0fe81e5] Running
	I0729 00:50:05.365402   17906 system_pods.go:89] "tiller-deploy-6677d64bcd-ctj2p" [19ff6eb3-431f-4705-9f70-09fb802cccd1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 00:50:05.365410   17906 system_pods.go:126] duration metric: took 21.193907ms to wait for k8s-apps to be running ...
	I0729 00:50:05.365419   17906 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 00:50:05.365460   17906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 00:50:05.414157   17906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 00:50:05.414181   17906 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 00:50:05.426390   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:05.431331   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:05.498476   17906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 00:50:05.498498   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 00:50:05.654636   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 00:50:05.793389   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:05.921994   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:05.925658   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:06.022554   17906 system_svc.go:56] duration metric: took 657.12426ms WaitForService to wait for kubelet
	I0729 00:50:06.022582   17906 kubeadm.go:582] duration metric: took 11.405844626s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 00:50:06.022600   17906 node_conditions.go:102] verifying NodePressure condition ...
	I0729 00:50:06.022715   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.800385788s)
	I0729 00:50:06.022761   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:06.022778   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:06.023053   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:06.023137   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:06.023151   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:06.023160   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:06.023165   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:06.023454   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:06.023469   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:06.023455   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:06.025878   17906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 00:50:06.025898   17906 node_conditions.go:123] node cpu capacity is 2
	I0729 00:50:06.025908   17906 node_conditions.go:105] duration metric: took 3.30242ms to run NodePressure ...
	I0729 00:50:06.025918   17906 start.go:241] waiting for startup goroutines ...
	I0729 00:50:06.293601   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:06.416586   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:06.422765   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:06.793842   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:06.917666   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:06.929141   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:07.310214   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:07.447937   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.793264102s)
	I0729 00:50:07.447985   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:07.447995   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:07.448280   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:07.448301   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:07.448317   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:07.448326   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:07.448591   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:07.448631   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:07.448649   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:07.450324   17906 addons.go:475] Verifying addon gcp-auth=true in "addons-657805"
	I0729 00:50:07.452164   17906 out.go:177] * Verifying gcp-auth addon...
	I0729 00:50:07.454823   17906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 00:50:07.455167   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:07.455290   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:07.463873   17906 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 00:50:07.463892   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
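At this point all four addon waiters (registry, ingress, csi-hostpath-driver, gcp-auth) are in place, and the remainder of this log is the poll loop: each kapi.go:96 line re-lists pods for one label selector and reports them still Pending. The equivalent checks can be run by hand; a sketch using the selectors and namespaces recorded above (the 90s timeout is an assumption, not a value from the log):

	kubectl --context addons-657805 -n kube-system   wait pod -l kubernetes.io/minikube-addons=registry            --for=condition=Ready --timeout=90s
	kubectl --context addons-657805 -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx              --for=condition=Ready --timeout=90s
	kubectl --context addons-657805 -n kube-system   wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=90s
	kubectl --context addons-657805 -n gcp-auth      wait pod -l kubernetes.io/minikube-addons=gcp-auth            --for=condition=Ready --timeout=90s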
	I0729 00:50:07.793624   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:07.915107   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:07.917728   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:07.960039   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:08.292831   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:08.415231   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:08.417478   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:08.458109   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:08.793178   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:08.916001   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:08.916234   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:08.958929   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:09.294729   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:09.415465   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:09.416985   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:09.458654   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:09.794587   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:09.915367   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:09.918264   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:09.959070   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:10.293393   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:10.415968   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:10.418556   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:10.459356   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:10.794129   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:10.915321   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:10.917847   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:10.958953   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:11.300040   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:11.415937   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:11.416270   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:11.459562   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:11.800657   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:11.914921   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:11.917297   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:11.959574   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:12.294109   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:12.416589   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:12.418876   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:12.459575   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:12.793518   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:12.916046   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:12.916170   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:12.958847   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:13.293497   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:13.415135   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:13.417448   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:13.458942   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:13.793113   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:13.915240   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:13.917375   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:13.959296   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:14.293234   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:14.417093   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:14.417800   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:14.461014   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:14.793957   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:14.916945   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:14.917474   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:14.958160   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:15.295795   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:15.415615   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:15.417663   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:15.458726   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:15.794791   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:15.914927   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:15.917658   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:15.958686   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:16.294618   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:16.416010   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:16.418278   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:16.461130   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:16.794291   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:16.915839   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:16.917011   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:16.958609   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:17.294108   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:17.415621   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:17.418682   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:17.458713   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:17.794111   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:17.915813   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:17.918256   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:17.959042   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:18.293436   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:18.416250   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:18.419149   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:18.458300   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:18.794227   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:19.164722   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:19.165055   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:19.171163   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:19.293562   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:19.418728   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:19.418904   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:19.458542   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:19.794246   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:19.916056   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:19.918930   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:19.958451   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:20.293317   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:20.416685   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:20.416723   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:20.459317   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:20.793622   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:20.916769   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:20.917213   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:20.958068   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:21.389664   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:21.415553   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:21.605464   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:21.607940   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:21.793475   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:21.917011   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:21.918764   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:21.958181   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:22.293434   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:22.416380   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:22.416996   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:22.458759   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:22.795087   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:22.917084   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:22.917538   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:22.959903   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:23.293798   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:23.416124   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:23.417160   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:23.459509   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:23.795149   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:23.917065   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:23.917182   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:23.959183   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:24.293262   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:24.416084   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:24.420910   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:24.462665   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:24.792992   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:24.916880   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:24.924929   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:24.959078   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:25.294236   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:25.420902   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:25.421035   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:25.462446   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:25.793552   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:25.916724   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:25.919289   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:25.959341   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:26.293642   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:26.415660   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:26.417557   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:26.458763   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:26.794029   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:26.920985   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:26.921307   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:26.959003   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:27.293478   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:27.418196   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:27.428104   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:27.459191   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:27.795671   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:27.915489   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:27.916747   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:27.958825   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:28.293589   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:28.415504   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:28.416988   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:28.458858   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:28.792919   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:28.918390   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:28.918526   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:28.958222   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:29.293578   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:29.416206   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:29.417396   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:29.458847   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:29.795200   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:29.916876   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:29.916971   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:29.958999   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:30.293375   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:30.416241   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:30.417575   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:30.460086   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:30.794296   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:30.915960   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:30.916394   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:30.958956   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:31.293266   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:31.415865   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:31.417415   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:31.458369   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:31.793919   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:31.916660   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:31.919139   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:31.958484   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:32.293775   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:32.415211   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:32.416945   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:32.458924   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:32.793860   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:32.916683   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:32.918648   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:32.958935   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:33.293861   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:33.417304   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:33.418381   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:33.465257   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:33.794074   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:33.915644   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:33.918283   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:33.959077   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:34.296798   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:34.416340   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:34.419809   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:34.458745   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:34.793459   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:34.915002   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:34.916330   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:34.958562   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:35.294317   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:35.415501   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:35.417008   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:35.458614   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:35.794384   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:35.916392   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:35.924541   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:35.957844   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:36.293170   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:36.416604   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:36.417782   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:36.458463   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:36.793450   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:36.916337   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:36.918492   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:36.957947   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:37.293724   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:37.415291   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:37.418054   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:37.459046   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:37.803053   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:37.915500   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:37.917916   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:37.958302   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:38.293994   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:38.416247   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:38.417789   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:38.458688   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:38.795144   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:38.916731   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:38.917671   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:38.958612   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:39.294623   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:39.417705   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:39.420751   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:39.458555   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:39.793844   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:39.916525   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:39.918126   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:39.958362   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:40.311690   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:40.417256   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:40.420424   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:40.459721   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:40.795394   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:40.917614   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:40.919326   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:40.959015   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:41.294555   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:41.415712   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:41.416589   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:41.458416   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:41.794711   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:41.919512   17906 kapi.go:107] duration metric: took 38.008865162s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 00:50:41.921418   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:41.958150   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:42.293731   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:42.417551   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:42.460586   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:42.813599   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:42.918442   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:42.957997   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:43.293429   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:43.417485   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:43.458940   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:43.792996   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:43.917694   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:43.958772   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:44.293552   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:44.417081   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:44.458612   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:44.793858   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:44.917862   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:44.959330   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:45.294287   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:45.417648   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:45.459006   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:45.796156   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:45.917416   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:45.960023   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:46.294992   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:46.418226   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:46.459597   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:46.795845   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:46.916916   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:46.958855   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:47.295236   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:47.417167   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:47.458611   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:47.967630   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:47.967919   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:47.969948   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:48.294251   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:48.416434   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:48.459155   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:48.794070   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:48.917450   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:48.959205   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:49.296441   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:49.416862   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:49.458740   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:49.793716   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:49.916832   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:49.958657   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:50.294215   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:50.417426   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:50.458940   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:50.794139   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:50.917446   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:50.958240   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:51.295765   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:51.417116   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:51.459661   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:51.798611   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:51.916774   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:51.959051   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:52.293663   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:52.416862   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:52.458230   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:52.793727   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:52.917266   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:52.958676   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:53.293738   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:53.418428   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:53.458865   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:54.090696   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:54.091307   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:54.091692   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:54.293911   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:54.416610   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:54.458318   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:54.801567   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:54.918057   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:54.958830   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:55.294624   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:55.417036   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:55.458698   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:55.794444   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:55.917857   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:55.957973   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:56.293175   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:56.417919   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:56.457952   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:56.792902   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:56.917274   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:56.959315   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:57.293635   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:57.416881   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:57.458743   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:57.795626   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:57.917082   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:57.958546   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:58.293663   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:58.416844   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:58.458328   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:58.793336   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:58.916763   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:58.957845   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:59.294576   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:59.419024   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:59.460116   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:59.793125   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:59.918067   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:59.959162   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:00.293544   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:00.417057   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:00.458755   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:00.794333   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:00.917099   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:00.959092   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:01.293812   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:01.417283   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:01.458552   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:01.793728   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:01.930231   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:01.959014   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:02.299911   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:02.416988   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:02.458064   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:02.794059   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:02.917891   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:02.959016   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:03.293712   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:03.418613   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:03.458661   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:03.794551   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:03.917325   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:03.959128   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:04.294619   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:04.418199   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:04.458448   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:04.800017   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:04.917067   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:04.960313   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:05.293286   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:05.417749   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:05.458251   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:05.793386   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:05.916777   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:05.958942   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:06.295168   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:06.417038   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:06.458770   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:06.794453   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:06.917259   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:06.959295   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:07.294293   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:07.417836   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:07.458733   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:07.805483   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:07.927910   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:07.959337   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:08.294340   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:08.417346   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:08.459422   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:08.793856   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:08.916594   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:08.972481   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:09.294970   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:09.417678   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:09.458336   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:09.793489   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:09.917613   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:09.958195   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:10.293407   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:10.417413   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:10.458542   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:10.794041   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:10.917108   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:10.958597   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:11.295409   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:11.416559   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:11.458522   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:11.794666   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:11.916970   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:11.958754   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:12.299614   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:12.416955   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:12.458964   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:12.794012   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:12.916925   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:12.958168   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:13.294832   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:13.417866   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:13.458888   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:13.898708   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:13.918625   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:13.958107   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:14.294154   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:14.416977   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:14.458640   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:14.794358   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:14.918356   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:14.960056   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:15.293598   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:15.438874   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:15.460177   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:15.793459   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:15.919345   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:15.959414   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:16.294615   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:16.417044   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:16.458298   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:16.794143   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:16.917157   17906 kapi.go:107] duration metric: took 1m13.004632603s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 00:51:16.959659   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:17.294353   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:17.459126   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:17.793568   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:17.958100   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:18.293319   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:18.459162   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:18.793370   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:18.959346   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:19.294109   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:19.458944   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:19.793988   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:19.958548   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:20.295678   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:20.458527   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:20.794127   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:20.960104   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:21.293907   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:21.458666   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:21.940387   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:21.958287   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:22.293457   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:22.462578   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:22.796470   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:22.970039   17906 kapi.go:107] duration metric: took 1m15.515214827s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 00:51:22.971993   17906 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-657805 cluster.
	I0729 00:51:22.973370   17906 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 00:51:22.974619   17906 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
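	As an aside on the three gcp-auth messages above: a minimal sketch of a pod manifest that opts out of the credential mount by carrying the `gcp-auth-skip-secret` label the addon mentions. Only the label key comes from this log; the value "true", the pod name, the container image, and the use of client-go types to print the manifest are assumptions made for illustration, not details taken from this run.

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Build a pod spec that the gcp-auth webhook should leave untouched.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical name
				Labels: map[string]string{
					// Label key taken from the gcp-auth message above;
					// the value "true" is an assumption for this sketch.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"}, // hypothetical container
				},
			},
		}

		// Print the manifest as JSON so it can be applied with kubectl if desired.
		out, err := json.MarshalIndent(pod, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}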
	I0729 00:51:23.293497   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:23.797195   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:24.294180   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:24.796404   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:25.295601   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:25.792253   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:26.303615   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:26.793248   17906 kapi.go:107] duration metric: took 1m21.50543284s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 00:51:26.795037   17906 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0729 00:51:26.796349   17906 addons.go:510] duration metric: took 1m32.179554034s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0729 00:51:26.796384   17906 start.go:246] waiting for cluster config update ...
	I0729 00:51:26.796400   17906 start.go:255] writing updated cluster config ...
	I0729 00:51:26.796623   17906 ssh_runner.go:195] Run: rm -f paused
	I0729 00:51:26.851922   17906 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 00:51:26.853550   17906 out.go:177] * Done! kubectl is now configured to use "addons-657805" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 00:54:29 addons-657805 crio[683]: time="2024-07-29 00:54:29.955192322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214469955166628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4694b0b-d227-45aa-8b5c-db7b0765f954 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:29 addons-657805 crio[683]: time="2024-07-29 00:54:29.955837073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bd8e371-43df-4fd0-819c-f7e72f7e9c48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:29 addons-657805 crio[683]: time="2024-07-29 00:54:29.955892702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bd8e371-43df-4fd0-819c-f7e72f7e9c48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:29 addons-657805 crio[683]: time="2024-07-29 00:54:29.956213304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa55d39ae5e291484ac4f1c33579c2e91f8d2ae625b528e694db118544bbbf83,PodSandboxId:119cb273775b83b9666494584f6e5bfefcaff43c3a59e7cc4b9d1d945527b437,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261860896971,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7ph7w,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 360b1235-1dfd-404f-b7a9-de31a0df7101,},Annotations:map[string]string{io.kubernetes.container.hash: 2072e506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff01203fa88086556b450df255a200917869e52399738dc6535f2623640184e,PodSandboxId:7501523e58085881e08c36cdf1b7eca319b27250c74a61cb0634b6c1495ba10e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261744707527,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vcl7,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba23da7f-5907-45b5-81a6-8c9c919a205f,},Annotations:map[string]string{io.kubernetes.container.hash: a4250ce6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c700
71dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722214175860649173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bd8e371-43df-4fd0-819c-f7e72f7e9c48 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.001053977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e307b99e-67ed-4632-98c3-0940e4a66b70 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.001355722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e307b99e-67ed-4632-98c3-0940e4a66b70 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.002462148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec5d38a9-ec27-4c11-bcf9-f0d2b27484af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.003772548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214470003746669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec5d38a9-ec27-4c11-bcf9-f0d2b27484af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.004522304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a27fa44-a046-4371-a1f1-38a9d655f006 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.004582567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a27fa44-a046-4371-a1f1-38a9d655f006 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.004882162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa55d39ae5e291484ac4f1c33579c2e91f8d2ae625b528e694db118544bbbf83,PodSandboxId:119cb273775b83b9666494584f6e5bfefcaff43c3a59e7cc4b9d1d945527b437,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261860896971,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7ph7w,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 360b1235-1dfd-404f-b7a9-de31a0df7101,},Annotations:map[string]string{io.kubernetes.container.hash: 2072e506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff01203fa88086556b450df255a200917869e52399738dc6535f2623640184e,PodSandboxId:7501523e58085881e08c36cdf1b7eca319b27250c74a61cb0634b6c1495ba10e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261744707527,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vcl7,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba23da7f-5907-45b5-81a6-8c9c919a205f,},Annotations:map[string]string{io.kubernetes.container.hash: a4250ce6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c700
71dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722214175860649173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a27fa44-a046-4371-a1f1-38a9d655f006 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.043210707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae9e007f-895f-467f-9c1c-82fe1ce049eb name=/runtime.v1.RuntimeService/Version
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.043280355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae9e007f-895f-467f-9c1c-82fe1ce049eb name=/runtime.v1.RuntimeService/Version
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.044663074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e374940-9a71-4619-9b3e-a44dc7677213 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.046274233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214470046249132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e374940-9a71-4619-9b3e-a44dc7677213 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.046854562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5e2391a-bafd-41a9-b664-d6718b3422c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.046908737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5e2391a-bafd-41a9-b664-d6718b3422c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.047184725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa55d39ae5e291484ac4f1c33579c2e91f8d2ae625b528e694db118544bbbf83,PodSandboxId:119cb273775b83b9666494584f6e5bfefcaff43c3a59e7cc4b9d1d945527b437,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261860896971,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7ph7w,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 360b1235-1dfd-404f-b7a9-de31a0df7101,},Annotations:map[string]string{io.kubernetes.container.hash: 2072e506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff01203fa88086556b450df255a200917869e52399738dc6535f2623640184e,PodSandboxId:7501523e58085881e08c36cdf1b7eca319b27250c74a61cb0634b6c1495ba10e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261744707527,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vcl7,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba23da7f-5907-45b5-81a6-8c9c919a205f,},Annotations:map[string]string{io.kubernetes.container.hash: a4250ce6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c700
71dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722214175860649173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5e2391a-bafd-41a9-b664-d6718b3422c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.083182691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b1acc84-b8e1-4521-914d-8387958a6986 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.083279479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b1acc84-b8e1-4521-914d-8387958a6986 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.084182887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72a5ed62-5c68-4ca5-99c1-474d25afd5b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.085621821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214470085595063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72a5ed62-5c68-4ca5-99c1-474d25afd5b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.086216322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90c3d8c0-8049-4133-9ece-2d4cd543d885 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.086286950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90c3d8c0-8049-4133-9ece-2d4cd543d885 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:54:30 addons-657805 crio[683]: time="2024-07-29 00:54:30.086642033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa55d39ae5e291484ac4f1c33579c2e91f8d2ae625b528e694db118544bbbf83,PodSandboxId:119cb273775b83b9666494584f6e5bfefcaff43c3a59e7cc4b9d1d945527b437,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261860896971,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7ph7w,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 360b1235-1dfd-404f-b7a9-de31a0df7101,},Annotations:map[string]string{io.kubernetes.container.hash: 2072e506,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ff01203fa88086556b450df255a200917869e52399738dc6535f2623640184e,PodSandboxId:7501523e58085881e08c36cdf1b7eca319b27250c74a61cb0634b6c1495ba10e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722214261744707527,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vcl7,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ba23da7f-5907-45b5-81a6-8c9c919a205f,},Annotations:map[string]string{io.kubernetes.container.hash: a4250ce6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c700
71dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722214175860649173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90c3d8c0-8049-4133-9ece-2d4cd543d885 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8da8b71c903c7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   bc6ffcaf0af23       hello-world-app-6778b5fc9f-srwb4
	55f979ed0d59c       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   21a7602a79700       nginx
	da001d6bccbde       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   6ef848877b3de       busybox
	aa55d39ae5e29       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   119cb273775b8       ingress-nginx-admission-patch-7ph7w
	8ff01203fa880       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   7501523e58085       ingress-nginx-admission-create-6vcl7
	fd8c616404fea       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   8c9063fe62de5       metrics-server-c59844bb4-5pktj
	1e58106e1d280       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   e10f3dcd91eed       storage-provisioner
	da92967e70eba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   0b021da77f062       coredns-7db6d8ff4d-sglhh
	ef538b61c48a8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   750fec69cfa87       kube-proxy-kvp86
	ebe4fbe2afb49       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   3f36a2408acea       kube-scheduler-addons-657805
	56ba32aabad2a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   63690cadeb68b       kube-apiserver-addons-657805
	219ee84cb5479       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   9987febd0df8e       etcd-addons-657805
	cb14a7eb5f100       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   0ea5272a19421       kube-controller-manager-addons-657805
	
	
	==> coredns [da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b] <==
	[INFO] 10.244.0.7:55876 - 59019 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105277s
	[INFO] 10.244.0.7:58036 - 56772 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126617s
	[INFO] 10.244.0.7:58036 - 1986 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085045s
	[INFO] 10.244.0.7:51168 - 54477 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000139345s
	[INFO] 10.244.0.7:51168 - 52687 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079759s
	[INFO] 10.244.0.7:35664 - 19161 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090404s
	[INFO] 10.244.0.7:35664 - 57048 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091894s
	[INFO] 10.244.0.7:35265 - 64461 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142386s
	[INFO] 10.244.0.7:35265 - 59336 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074863s
	[INFO] 10.244.0.7:57121 - 61683 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068608s
	[INFO] 10.244.0.7:57121 - 14833 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080524s
	[INFO] 10.244.0.7:54405 - 19529 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005349s
	[INFO] 10.244.0.7:54405 - 39240 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00024391s
	[INFO] 10.244.0.7:59019 - 63328 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056187s
	[INFO] 10.244.0.7:59019 - 22558 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110906s
	[INFO] 10.244.0.22:45945 - 22370 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000486413s
	[INFO] 10.244.0.22:33955 - 7214 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000074675s
	[INFO] 10.244.0.22:36185 - 63378 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000269409s
	[INFO] 10.244.0.22:35608 - 62867 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155662s
	[INFO] 10.244.0.22:57123 - 37617 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109529s
	[INFO] 10.244.0.22:58981 - 21344 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127524s
	[INFO] 10.244.0.22:40430 - 12022 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001347922s
	[INFO] 10.244.0.22:47662 - 23434 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001088439s
	[INFO] 10.244.0.24:45684 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000273223s
	[INFO] 10.244.0.24:33956 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183503s
	
	
	==> describe nodes <==
	Name:               addons-657805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-657805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=addons-657805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T00_49_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-657805
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 00:49:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-657805
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 00:54:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 00:53:46 +0000   Mon, 29 Jul 2024 00:49:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 00:53:46 +0000   Mon, 29 Jul 2024 00:49:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 00:53:46 +0000   Mon, 29 Jul 2024 00:49:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 00:53:46 +0000   Mon, 29 Jul 2024 00:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    addons-657805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e47ebefea744fb299de58a1d88e126a
	  System UUID:                1e47ebef-ea74-4fb2-99de-58a1d88e126a
	  Boot ID:                    b952f0ff-9332-441c-81d7-1e7f5d3c3cc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-6778b5fc9f-srwb4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-7db6d8ff4d-sglhh                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-657805                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-addons-657805             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-657805    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-kvp86                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-657805             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 metrics-server-c59844bb4-5pktj           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m31s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node addons-657805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node addons-657805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node addons-657805 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s                  kubelet          Node addons-657805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s                  kubelet          Node addons-657805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s                  kubelet          Node addons-657805 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m48s                  kubelet          Node addons-657805 status is now: NodeReady
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-657805 event: Registered Node addons-657805 in Controller
	
	
	==> dmesg <==
	[  +0.155269] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.021462] kauditd_printk_skb: 104 callbacks suppressed
	[Jul29 00:50] kauditd_printk_skb: 117 callbacks suppressed
	[  +6.716185] kauditd_printk_skb: 103 callbacks suppressed
	[ +22.326278] kauditd_printk_skb: 4 callbacks suppressed
	[ +20.601294] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 00:51] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.026049] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.273740] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.816421] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.060110] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.005666] kauditd_printk_skb: 50 callbacks suppressed
	[ +23.974926] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.348244] kauditd_printk_skb: 4 callbacks suppressed
	[Jul29 00:52] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.398639] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.061088] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.129848] kauditd_printk_skb: 35 callbacks suppressed
	[Jul29 00:53] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.506236] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.043604] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.012015] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.474495] kauditd_printk_skb: 16 callbacks suppressed
	[Jul29 00:54] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.038552] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67] <==
	{"level":"warn","ts":"2024-07-29T00:50:47.957836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.223473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85382"}
	{"level":"info","ts":"2024-07-29T00:50:47.957879Z","caller":"traceutil/trace.go:171","msg":"trace[223413755] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:989; }","duration":"173.291667ms","start":"2024-07-29T00:50:47.784579Z","end":"2024-07-29T00:50:47.957871Z","steps":["trace[223413755] 'range keys from in-memory index tree'  (duration: 173.059669ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:50:54.080162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.255432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14071"}
	{"level":"info","ts":"2024-07-29T00:50:54.080274Z","caller":"traceutil/trace.go:171","msg":"trace[1809084110] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1008; }","duration":"171.445393ms","start":"2024-07-29T00:50:53.908805Z","end":"2024-07-29T00:50:54.080251Z","steps":["trace[1809084110] 'range keys from in-memory index tree'  (duration: 171.141827ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:50:54.08038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.095073ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-07-29T00:50:54.080406Z","caller":"traceutil/trace.go:171","msg":"trace[895416400] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1008; }","duration":"129.139702ms","start":"2024-07-29T00:50:53.951259Z","end":"2024-07-29T00:50:54.080398Z","steps":["trace[895416400] 'range keys from in-memory index tree'  (duration: 128.921314ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:50:54.080222Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.02907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85439"}
	{"level":"info","ts":"2024-07-29T00:50:54.080452Z","caller":"traceutil/trace.go:171","msg":"trace[1483522574] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1008; }","duration":"295.286229ms","start":"2024-07-29T00:50:53.78516Z","end":"2024-07-29T00:50:54.080446Z","steps":["trace[1483522574] 'range keys from in-memory index tree'  (duration: 294.843109ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:06.677243Z","caller":"traceutil/trace.go:171","msg":"trace[1170433266] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"211.742091ms","start":"2024-07-29T00:51:06.465482Z","end":"2024-07-29T00:51:06.677225Z","steps":["trace[1170433266] 'process raft request'  (duration: 211.574154ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:13.884671Z","caller":"traceutil/trace.go:171","msg":"trace[648145901] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1171; }","duration":"102.102891ms","start":"2024-07-29T00:51:13.782553Z","end":"2024-07-29T00:51:13.884656Z","steps":["trace[648145901] 'read index received'  (duration: 101.951568ms)","trace[648145901] 'applied index is now lower than readState.Index'  (duration: 150.647µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T00:51:13.884887Z","caller":"traceutil/trace.go:171","msg":"trace[1463469719] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"183.976845ms","start":"2024-07-29T00:51:13.700896Z","end":"2024-07-29T00:51:13.884872Z","steps":["trace[1463469719] 'process raft request'  (duration: 183.652776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:51:13.885036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.463151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85581"}
	{"level":"info","ts":"2024-07-29T00:51:13.885103Z","caller":"traceutil/trace.go:171","msg":"trace[1965017160] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1138; }","duration":"102.565024ms","start":"2024-07-29T00:51:13.782529Z","end":"2024-07-29T00:51:13.885094Z","steps":["trace[1965017160] 'agreement among raft nodes before linearized reading'  (duration: 102.280834ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:21.923558Z","caller":"traceutil/trace.go:171","msg":"trace[985096048] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"151.443065ms","start":"2024-07-29T00:51:21.772093Z","end":"2024-07-29T00:51:21.923536Z","steps":["trace[985096048] 'process raft request'  (duration: 150.646727ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:21.924458Z","caller":"traceutil/trace.go:171","msg":"trace[621982709] linearizableReadLoop","detail":"{readStateIndex:1205; appliedIndex:1204; }","duration":"142.542934ms","start":"2024-07-29T00:51:21.781902Z","end":"2024-07-29T00:51:21.924445Z","steps":["trace[621982709] 'read index received'  (duration: 140.14895ms)","trace[621982709] 'applied index is now lower than readState.Index'  (duration: 2.391752ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T00:51:21.924753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.836174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85581"}
	{"level":"info","ts":"2024-07-29T00:51:21.925367Z","caller":"traceutil/trace.go:171","msg":"trace[408582026] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1169; }","duration":"143.47424ms","start":"2024-07-29T00:51:21.781881Z","end":"2024-07-29T00:51:21.925355Z","steps":["trace[408582026] 'agreement among raft nodes before linearized reading'  (duration: 142.689863ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:26.289861Z","caller":"traceutil/trace.go:171","msg":"trace[795199920] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"337.382057ms","start":"2024-07-29T00:51:25.952459Z","end":"2024-07-29T00:51:26.289841Z","steps":["trace[795199920] 'process raft request'  (duration: 336.811052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:51:26.290205Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T00:51:25.952444Z","time spent":"337.642819ms","remote":"127.0.0.1:39426","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1192 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T00:53:00.348223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.85875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-q9787.17e688b00a6169eb\" ","response":"range_response_count:1 size:859"}
	{"level":"info","ts":"2024-07-29T00:53:00.348418Z","caller":"traceutil/trace.go:171","msg":"trace[774637368] range","detail":"{range_begin:/registry/events/kube-system/nvidia-device-plugin-daemonset-q9787.17e688b00a6169eb; range_end:; response_count:1; response_revision:1675; }","duration":"170.150272ms","start":"2024-07-29T00:53:00.178245Z","end":"2024-07-29T00:53:00.348395Z","steps":["trace[774637368] 'agreement among raft nodes before linearized reading'  (duration: 169.806733ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:53:00.348901Z","caller":"traceutil/trace.go:171","msg":"trace[1667996954] linearizableReadLoop","detail":"{readStateIndex:1741; appliedIndex:1740; }","duration":"169.707876ms","start":"2024-07-29T00:53:00.178276Z","end":"2024-07-29T00:53:00.347984Z","steps":["trace[1667996954] 'read index received'  (duration: 169.194164ms)","trace[1667996954] 'applied index is now lower than readState.Index'  (duration: 512.613µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T00:53:22.608961Z","caller":"traceutil/trace.go:171","msg":"trace[1554804269] transaction","detail":"{read_only:false; response_revision:1889; number_of_response:1; }","duration":"298.775975ms","start":"2024-07-29T00:53:22.310104Z","end":"2024-07-29T00:53:22.60888Z","steps":["trace[1554804269] 'process raft request'  (duration: 298.531695ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:53:53.263631Z","caller":"traceutil/trace.go:171","msg":"trace[1140891849] transaction","detail":"{read_only:false; response_revision:1985; number_of_response:1; }","duration":"161.993518ms","start":"2024-07-29T00:53:53.101621Z","end":"2024-07-29T00:53:53.263614Z","steps":["trace[1140891849] 'process raft request'  (duration: 161.654901ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:53:59.407457Z","caller":"traceutil/trace.go:171","msg":"trace[1269141747] transaction","detail":"{read_only:false; response_revision:1991; number_of_response:1; }","duration":"119.68115ms","start":"2024-07-29T00:53:59.28776Z","end":"2024-07-29T00:53:59.407441Z","steps":["trace[1269141747] 'process raft request'  (duration: 119.362093ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:54:30 up 5 min,  0 users,  load average: 0.61, 1.09, 0.57
	Linux addons-657805 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde] <==
	E0729 00:51:40.644899       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.89.44:443: connect: connection refused
	E0729 00:51:40.649947       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.89.44:443: connect: connection refused
	I0729 00:51:40.761717       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 00:51:51.456750       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 00:51:52.481996       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 00:51:57.101858       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 00:51:57.297628       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.59.176"}
	E0729 00:52:31.022868       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 00:52:32.261614       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 00:53:09.386173       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.386211       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.411553       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.411653       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.442944       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.443731       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.464007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.464107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.489278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.489627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 00:53:10.443912       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 00:53:10.489776       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 00:53:10.507708       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 00:53:16.746637       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.11.83"}
	I0729 00:54:20.465525       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.142.52"}
	E0729 00:54:22.280393       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206] <==
	I0729 00:53:29.410080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="4.051µs"
	W0729 00:53:31.061418       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:53:31.061469       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 00:53:39.496802       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0729 00:53:44.849447       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:53:44.849507       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:53:46.918459       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:53:46.918492       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:53:48.127675       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:53:48.127760       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:53:52.218651       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:53:52.218693       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 00:54:20.329051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="50.569864ms"
	I0729 00:54:20.341817       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.702195ms"
	I0729 00:54:20.342214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="42.6µs"
	I0729 00:54:20.348113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="34.476µs"
	I0729 00:54:22.177058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.079µs"
	I0729 00:54:22.180802       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 00:54:22.187219       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 00:54:23.539583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="7.275024ms"
	I0729 00:54:23.539657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="33.163µs"
	W0729 00:54:26.789974       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:54:26.790101       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:54:29.175585       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:54:29.175709       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245] <==
	I0729 00:49:56.622720       1 server_linux.go:69] "Using iptables proxy"
	I0729 00:49:56.643764       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	I0729 00:49:56.779490       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 00:49:56.779543       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 00:49:56.779561       1 server_linux.go:165] "Using iptables Proxier"
	I0729 00:49:56.783906       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 00:49:56.784134       1 server.go:872] "Version info" version="v1.30.3"
	I0729 00:49:56.784163       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 00:49:56.785944       1 config.go:192] "Starting service config controller"
	I0729 00:49:56.785976       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 00:49:56.786000       1 config.go:101] "Starting endpoint slice config controller"
	I0729 00:49:56.786004       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 00:49:56.786523       1 config.go:319] "Starting node config controller"
	I0729 00:49:56.786550       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 00:49:56.887399       1 shared_informer.go:320] Caches are synced for node config
	I0729 00:49:56.887441       1 shared_informer.go:320] Caches are synced for service config
	I0729 00:49:56.887461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f] <==
	E0729 00:49:38.498005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 00:49:38.497989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 00:49:38.498081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 00:49:38.498225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 00:49:38.498221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 00:49:38.498275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 00:49:39.434115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 00:49:39.434160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 00:49:39.542681       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 00:49:39.542730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 00:49:39.640735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 00:49:39.640778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 00:49:39.644819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 00:49:39.644861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 00:49:39.644831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 00:49:39.644882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 00:49:39.701594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 00:49:39.701775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 00:49:39.705765       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 00:49:39.705846       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 00:49:39.767292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 00:49:39.767472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 00:49:39.822818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 00:49:39.822914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 00:49:42.392445       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 00:54:20 addons-657805 kubelet[1271]: I0729 00:54:20.323284    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fcaf36-69f1-449b-bf17-176ca5833aca" containerName="helm-test"
	Jul 29 00:54:20 addons-657805 kubelet[1271]: I0729 00:54:20.323401    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ff6eb3-431f-4705-9f70-09fb802cccd1" containerName="tiller"
	Jul 29 00:54:20 addons-657805 kubelet[1271]: I0729 00:54:20.323523    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="37016442-9560-4608-811c-61cc9bfff166" containerName="headlamp"
	Jul 29 00:54:20 addons-657805 kubelet[1271]: I0729 00:54:20.427699    1271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblc2\" (UniqueName: \"kubernetes.io/projected/3f8c5130-5429-4ba4-b0bc-d64604463eea-kube-api-access-rblc2\") pod \"hello-world-app-6778b5fc9f-srwb4\" (UID: \"3f8c5130-5429-4ba4-b0bc-d64604463eea\") " pod="default/hello-world-app-6778b5fc9f-srwb4"
	Jul 29 00:54:21 addons-657805 kubelet[1271]: I0729 00:54:21.439447    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hnqc\" (UniqueName: \"kubernetes.io/projected/a3d38178-b58f-4c20-aa2c-a333b13ba547-kube-api-access-2hnqc\") pod \"a3d38178-b58f-4c20-aa2c-a333b13ba547\" (UID: \"a3d38178-b58f-4c20-aa2c-a333b13ba547\") "
	Jul 29 00:54:21 addons-657805 kubelet[1271]: I0729 00:54:21.441595    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d38178-b58f-4c20-aa2c-a333b13ba547-kube-api-access-2hnqc" (OuterVolumeSpecName: "kube-api-access-2hnqc") pod "a3d38178-b58f-4c20-aa2c-a333b13ba547" (UID: "a3d38178-b58f-4c20-aa2c-a333b13ba547"). InnerVolumeSpecName "kube-api-access-2hnqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 00:54:21 addons-657805 kubelet[1271]: I0729 00:54:21.499958    1271 scope.go:117] "RemoveContainer" containerID="14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5"
	Jul 29 00:54:21 addons-657805 kubelet[1271]: I0729 00:54:21.525602    1271 scope.go:117] "RemoveContainer" containerID="14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5"
	Jul 29 00:54:21 addons-657805 kubelet[1271]: E0729 00:54:21.526236    1271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5\": container with ID starting with 14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5 not found: ID does not exist" containerID="14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5"
	Jul 29 00:54:21 addons-657805 kubelet[1271]: I0729 00:54:21.526283    1271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5"} err="failed to get container status \"14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5\": rpc error: code = NotFound desc = could not find container \"14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5\": container with ID starting with 14c6d1a400c920dbee4a6a83e8bd74af10a4a8b32323e93c8cc4e48ba7a0ecf5 not found: ID does not exist"
	Jul 29 00:54:21 addons-657805 kubelet[1271]: I0729 00:54:21.539679    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2hnqc\" (UniqueName: \"kubernetes.io/projected/a3d38178-b58f-4c20-aa2c-a333b13ba547-kube-api-access-2hnqc\") on node \"addons-657805\" DevicePath \"\""
	Jul 29 00:54:23 addons-657805 kubelet[1271]: I0729 00:54:23.181933    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360b1235-1dfd-404f-b7a9-de31a0df7101" path="/var/lib/kubelet/pods/360b1235-1dfd-404f-b7a9-de31a0df7101/volumes"
	Jul 29 00:54:23 addons-657805 kubelet[1271]: I0729 00:54:23.182651    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d38178-b58f-4c20-aa2c-a333b13ba547" path="/var/lib/kubelet/pods/a3d38178-b58f-4c20-aa2c-a333b13ba547/volumes"
	Jul 29 00:54:23 addons-657805 kubelet[1271]: I0729 00:54:23.183032    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba23da7f-5907-45b5-81a6-8c9c919a205f" path="/var/lib/kubelet/pods/ba23da7f-5907-45b5-81a6-8c9c919a205f/volumes"
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.469689    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l65gn\" (UniqueName: \"kubernetes.io/projected/88468d92-3a78-414c-a2bf-a04b6bc1c176-kube-api-access-l65gn\") pod \"88468d92-3a78-414c-a2bf-a04b6bc1c176\" (UID: \"88468d92-3a78-414c-a2bf-a04b6bc1c176\") "
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.469759    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88468d92-3a78-414c-a2bf-a04b6bc1c176-webhook-cert\") pod \"88468d92-3a78-414c-a2bf-a04b6bc1c176\" (UID: \"88468d92-3a78-414c-a2bf-a04b6bc1c176\") "
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.473074    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88468d92-3a78-414c-a2bf-a04b6bc1c176-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "88468d92-3a78-414c-a2bf-a04b6bc1c176" (UID: "88468d92-3a78-414c-a2bf-a04b6bc1c176"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.473268    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88468d92-3a78-414c-a2bf-a04b6bc1c176-kube-api-access-l65gn" (OuterVolumeSpecName: "kube-api-access-l65gn") pod "88468d92-3a78-414c-a2bf-a04b6bc1c176" (UID: "88468d92-3a78-414c-a2bf-a04b6bc1c176"). InnerVolumeSpecName "kube-api-access-l65gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.531670    1271 scope.go:117] "RemoveContainer" containerID="a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60"
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.568892    1271 scope.go:117] "RemoveContainer" containerID="a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60"
	Jul 29 00:54:25 addons-657805 kubelet[1271]: E0729 00:54:25.569737    1271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60\": container with ID starting with a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60 not found: ID does not exist" containerID="a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60"
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.569787    1271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60"} err="failed to get container status \"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60\": rpc error: code = NotFound desc = could not find container \"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60\": container with ID starting with a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60 not found: ID does not exist"
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.569896    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l65gn\" (UniqueName: \"kubernetes.io/projected/88468d92-3a78-414c-a2bf-a04b6bc1c176-kube-api-access-l65gn\") on node \"addons-657805\" DevicePath \"\""
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.569906    1271 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88468d92-3a78-414c-a2bf-a04b6bc1c176-webhook-cert\") on node \"addons-657805\" DevicePath \"\""
	Jul 29 00:54:27 addons-657805 kubelet[1271]: I0729 00:54:27.180522    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88468d92-3a78-414c-a2bf-a04b6bc1c176" path="/var/lib/kubelet/pods/88468d92-3a78-414c-a2bf-a04b6bc1c176/volumes"
	
	
	==> storage-provisioner [1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7] <==
	I0729 00:50:02.521606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 00:50:02.558535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 00:50:02.558604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 00:50:02.586184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 00:50:02.586687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-657805_66e7d17a-5e06-4549-9e4b-393f6ba9cef3!
	I0729 00:50:02.587754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3cd53403-dd90-4256-99b3-90c443eea919", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-657805_66e7d17a-5e06-4549-9e4b-393f6ba9cef3 became leader
	I0729 00:50:02.687077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-657805_66e7d17a-5e06-4549-9e4b-393f6ba9cef3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-657805 -n addons-657805
helpers_test.go:261: (dbg) Run:  kubectl --context addons-657805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.30s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (283.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.155681ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-5pktj" [f3d59e24-fa87-4a81-a526-dd3281cc933f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011326651s
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (74.771123ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-657805, age: 2m10.017072683s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (66.188831ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 2m0.00406908s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (93.414698ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 2m2.607120946s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (66.41204ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 2m6.693600696s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (74.822975ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 2m19.24520732s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (63.001298ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 2m36.117364519s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (64.809504ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 2m58.726248058s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (78.299705ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 3m26.8526403s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (61.671401ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 4m20.0382971s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (63.420212ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 5m1.729159148s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (65.586528ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 6m1.910806939s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-657805 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-657805 top pods -n kube-system: exit status 1 (60.018528ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sglhh, age: 6m32.573251153s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
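The repeated "Metrics not available" errors above mean the Metrics API never started serving within the test's wait window. A minimal manual check of that pipeline (a sketch only, assuming the same addons-657805 context and the standard metrics-server Deployment name, not part of the captured test output) would be:

	kubectl --context addons-657805 get apiservice v1beta1.metrics.k8s.io                  # Available should become True once metrics-server is serving
	kubectl --context addons-657805 -n kube-system logs deploy/metrics-server --tail=50    # look for scrape or TLS errors
	kubectl --context addons-657805 top nodes                                              # only succeeds after the first metrics window is populated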
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-657805 -n addons-657805
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 logs -n 25: (1.303353814s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-933059                                                                     | download-only-933059 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-899353 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | binary-mirror-899353                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44815                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-899353                                                                     | binary-mirror-899353 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| addons  | disable dashboard -p                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-657805 --wait=true                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:51 UTC | 29 Jul 24 00:51 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:51 UTC | 29 Jul 24 00:51 UTC |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| ip      | addons-657805 ip                                                                            | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-657805 ssh curl -s                                                                   | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-657805 ssh cat                                                                       | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	|         | /opt/local-path-provisioner/pvc-e4f965f3-bc18-4e6c-89fd-eee01e8cf9ee_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:52 UTC | 29 Jul 24 00:52 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-657805 addons                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-657805 addons                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | addons-657805                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | -p addons-657805                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | -p addons-657805                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:53 UTC | 29 Jul 24 00:53 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-657805 ip                                                                            | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:54 UTC | 29 Jul 24 00:54 UTC |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:54 UTC | 29 Jul 24 00:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-657805 addons disable                                                                | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:54 UTC | 29 Jul 24 00:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-657805 addons                                                                        | addons-657805        | jenkins | v1.33.1 | 29 Jul 24 00:56 UTC | 29 Jul 24 00:56 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 00:48:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 00:48:59.152350   17906 out.go:291] Setting OutFile to fd 1 ...
	I0729 00:48:59.152460   17906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:48:59.152470   17906 out.go:304] Setting ErrFile to fd 2...
	I0729 00:48:59.152475   17906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:48:59.152637   17906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 00:48:59.153192   17906 out.go:298] Setting JSON to false
	I0729 00:48:59.154024   17906 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1885,"bootTime":1722212254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 00:48:59.154087   17906 start.go:139] virtualization: kvm guest
	I0729 00:48:59.156168   17906 out.go:177] * [addons-657805] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 00:48:59.157603   17906 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 00:48:59.157615   17906 notify.go:220] Checking for updates...
	I0729 00:48:59.160142   17906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 00:48:59.161659   17906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:48:59.162968   17906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:48:59.164377   17906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 00:48:59.165569   17906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 00:48:59.167245   17906 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 00:48:59.197851   17906 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 00:48:59.199095   17906 start.go:297] selected driver: kvm2
	I0729 00:48:59.199113   17906 start.go:901] validating driver "kvm2" against <nil>
	I0729 00:48:59.199125   17906 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 00:48:59.199795   17906 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:48:59.199865   17906 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 00:48:59.214038   17906 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 00:48:59.214101   17906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 00:48:59.214355   17906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 00:48:59.214421   17906 cni.go:84] Creating CNI manager for ""
	I0729 00:48:59.214438   17906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:48:59.214451   17906 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 00:48:59.214518   17906 start.go:340] cluster config:
	{Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:48:59.214638   17906 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:48:59.216342   17906 out.go:177] * Starting "addons-657805" primary control-plane node in "addons-657805" cluster
	I0729 00:48:59.217454   17906 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 00:48:59.217489   17906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 00:48:59.217498   17906 cache.go:56] Caching tarball of preloaded images
	I0729 00:48:59.217561   17906 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 00:48:59.217570   17906 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 00:48:59.217854   17906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/config.json ...
	I0729 00:48:59.217873   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/config.json: {Name:mk09f93ef1170e1eddd5ac968b3e21a249e6a9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:48:59.217991   17906 start.go:360] acquireMachinesLock for addons-657805: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 00:48:59.218032   17906 start.go:364] duration metric: took 28.728µs to acquireMachinesLock for "addons-657805"
	I0729 00:48:59.218060   17906 start.go:93] Provisioning new machine with config: &{Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 00:48:59.218118   17906 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 00:48:59.219791   17906 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 00:48:59.219924   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:48:59.219957   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:48:59.234255   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0729 00:48:59.234658   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:48:59.235212   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:48:59.235226   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:48:59.235556   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:48:59.235748   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:48:59.235885   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:48:59.236054   17906 start.go:159] libmachine.API.Create for "addons-657805" (driver="kvm2")
	I0729 00:48:59.236082   17906 client.go:168] LocalClient.Create starting
	I0729 00:48:59.236129   17906 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 00:48:59.632092   17906 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 00:48:59.811121   17906 main.go:141] libmachine: Running pre-create checks...
	I0729 00:48:59.811146   17906 main.go:141] libmachine: (addons-657805) Calling .PreCreateCheck
	I0729 00:48:59.811661   17906 main.go:141] libmachine: (addons-657805) Calling .GetConfigRaw
	I0729 00:48:59.812105   17906 main.go:141] libmachine: Creating machine...
	I0729 00:48:59.812123   17906 main.go:141] libmachine: (addons-657805) Calling .Create
	I0729 00:48:59.812323   17906 main.go:141] libmachine: (addons-657805) Creating KVM machine...
	I0729 00:48:59.813559   17906 main.go:141] libmachine: (addons-657805) DBG | found existing default KVM network
	I0729 00:48:59.814281   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:48:59.814152   17928 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0729 00:48:59.814337   17906 main.go:141] libmachine: (addons-657805) DBG | created network xml: 
	I0729 00:48:59.814360   17906 main.go:141] libmachine: (addons-657805) DBG | <network>
	I0729 00:48:59.814368   17906 main.go:141] libmachine: (addons-657805) DBG |   <name>mk-addons-657805</name>
	I0729 00:48:59.814375   17906 main.go:141] libmachine: (addons-657805) DBG |   <dns enable='no'/>
	I0729 00:48:59.814381   17906 main.go:141] libmachine: (addons-657805) DBG |   
	I0729 00:48:59.814390   17906 main.go:141] libmachine: (addons-657805) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 00:48:59.814399   17906 main.go:141] libmachine: (addons-657805) DBG |     <dhcp>
	I0729 00:48:59.814410   17906 main.go:141] libmachine: (addons-657805) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 00:48:59.814419   17906 main.go:141] libmachine: (addons-657805) DBG |     </dhcp>
	I0729 00:48:59.814432   17906 main.go:141] libmachine: (addons-657805) DBG |   </ip>
	I0729 00:48:59.814444   17906 main.go:141] libmachine: (addons-657805) DBG |   
	I0729 00:48:59.814453   17906 main.go:141] libmachine: (addons-657805) DBG | </network>
	I0729 00:48:59.814464   17906 main.go:141] libmachine: (addons-657805) DBG | 
	I0729 00:48:59.819834   17906 main.go:141] libmachine: (addons-657805) DBG | trying to create private KVM network mk-addons-657805 192.168.39.0/24...
	I0729 00:48:59.883114   17906 main.go:141] libmachine: (addons-657805) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805 ...
	I0729 00:48:59.883147   17906 main.go:141] libmachine: (addons-657805) DBG | private KVM network mk-addons-657805 192.168.39.0/24 created
	I0729 00:48:59.883170   17906 main.go:141] libmachine: (addons-657805) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 00:48:59.883199   17906 main.go:141] libmachine: (addons-657805) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 00:48:59.883221   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:48:59.883000   17928 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:49:00.141462   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:00.141310   17928 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa...
	I0729 00:49:00.220687   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:00.220589   17928 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/addons-657805.rawdisk...
	I0729 00:49:00.220713   17906 main.go:141] libmachine: (addons-657805) DBG | Writing magic tar header
	I0729 00:49:00.220724   17906 main.go:141] libmachine: (addons-657805) DBG | Writing SSH key tar header
	I0729 00:49:00.220795   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:00.220716   17928 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805 ...
	I0729 00:49:00.220861   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805
	I0729 00:49:00.220880   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805 (perms=drwx------)
	I0729 00:49:00.220896   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 00:49:00.220914   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:49:00.220927   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 00:49:00.220940   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 00:49:00.220951   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 00:49:00.220963   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 00:49:00.220976   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 00:49:00.220987   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 00:49:00.220999   17906 main.go:141] libmachine: (addons-657805) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 00:49:00.221017   17906 main.go:141] libmachine: (addons-657805) Creating domain...
	I0729 00:49:00.221028   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home/jenkins
	I0729 00:49:00.221038   17906 main.go:141] libmachine: (addons-657805) DBG | Checking permissions on dir: /home
	I0729 00:49:00.221048   17906 main.go:141] libmachine: (addons-657805) DBG | Skipping /home - not owner
	I0729 00:49:00.221940   17906 main.go:141] libmachine: (addons-657805) define libvirt domain using xml: 
	I0729 00:49:00.221962   17906 main.go:141] libmachine: (addons-657805) <domain type='kvm'>
	I0729 00:49:00.221971   17906 main.go:141] libmachine: (addons-657805)   <name>addons-657805</name>
	I0729 00:49:00.221977   17906 main.go:141] libmachine: (addons-657805)   <memory unit='MiB'>4000</memory>
	I0729 00:49:00.221985   17906 main.go:141] libmachine: (addons-657805)   <vcpu>2</vcpu>
	I0729 00:49:00.221991   17906 main.go:141] libmachine: (addons-657805)   <features>
	I0729 00:49:00.222000   17906 main.go:141] libmachine: (addons-657805)     <acpi/>
	I0729 00:49:00.222011   17906 main.go:141] libmachine: (addons-657805)     <apic/>
	I0729 00:49:00.222020   17906 main.go:141] libmachine: (addons-657805)     <pae/>
	I0729 00:49:00.222030   17906 main.go:141] libmachine: (addons-657805)     
	I0729 00:49:00.222038   17906 main.go:141] libmachine: (addons-657805)   </features>
	I0729 00:49:00.222046   17906 main.go:141] libmachine: (addons-657805)   <cpu mode='host-passthrough'>
	I0729 00:49:00.222058   17906 main.go:141] libmachine: (addons-657805)   
	I0729 00:49:00.222068   17906 main.go:141] libmachine: (addons-657805)   </cpu>
	I0729 00:49:00.222080   17906 main.go:141] libmachine: (addons-657805)   <os>
	I0729 00:49:00.222091   17906 main.go:141] libmachine: (addons-657805)     <type>hvm</type>
	I0729 00:49:00.222102   17906 main.go:141] libmachine: (addons-657805)     <boot dev='cdrom'/>
	I0729 00:49:00.222110   17906 main.go:141] libmachine: (addons-657805)     <boot dev='hd'/>
	I0729 00:49:00.222139   17906 main.go:141] libmachine: (addons-657805)     <bootmenu enable='no'/>
	I0729 00:49:00.222162   17906 main.go:141] libmachine: (addons-657805)   </os>
	I0729 00:49:00.222171   17906 main.go:141] libmachine: (addons-657805)   <devices>
	I0729 00:49:00.222203   17906 main.go:141] libmachine: (addons-657805)     <disk type='file' device='cdrom'>
	I0729 00:49:00.222223   17906 main.go:141] libmachine: (addons-657805)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/boot2docker.iso'/>
	I0729 00:49:00.222230   17906 main.go:141] libmachine: (addons-657805)       <target dev='hdc' bus='scsi'/>
	I0729 00:49:00.222240   17906 main.go:141] libmachine: (addons-657805)       <readonly/>
	I0729 00:49:00.222246   17906 main.go:141] libmachine: (addons-657805)     </disk>
	I0729 00:49:00.222258   17906 main.go:141] libmachine: (addons-657805)     <disk type='file' device='disk'>
	I0729 00:49:00.222282   17906 main.go:141] libmachine: (addons-657805)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 00:49:00.222298   17906 main.go:141] libmachine: (addons-657805)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/addons-657805.rawdisk'/>
	I0729 00:49:00.222309   17906 main.go:141] libmachine: (addons-657805)       <target dev='hda' bus='virtio'/>
	I0729 00:49:00.222316   17906 main.go:141] libmachine: (addons-657805)     </disk>
	I0729 00:49:00.222325   17906 main.go:141] libmachine: (addons-657805)     <interface type='network'>
	I0729 00:49:00.222331   17906 main.go:141] libmachine: (addons-657805)       <source network='mk-addons-657805'/>
	I0729 00:49:00.222337   17906 main.go:141] libmachine: (addons-657805)       <model type='virtio'/>
	I0729 00:49:00.222356   17906 main.go:141] libmachine: (addons-657805)     </interface>
	I0729 00:49:00.222376   17906 main.go:141] libmachine: (addons-657805)     <interface type='network'>
	I0729 00:49:00.222395   17906 main.go:141] libmachine: (addons-657805)       <source network='default'/>
	I0729 00:49:00.222413   17906 main.go:141] libmachine: (addons-657805)       <model type='virtio'/>
	I0729 00:49:00.222425   17906 main.go:141] libmachine: (addons-657805)     </interface>
	I0729 00:49:00.222436   17906 main.go:141] libmachine: (addons-657805)     <serial type='pty'>
	I0729 00:49:00.222448   17906 main.go:141] libmachine: (addons-657805)       <target port='0'/>
	I0729 00:49:00.222457   17906 main.go:141] libmachine: (addons-657805)     </serial>
	I0729 00:49:00.222467   17906 main.go:141] libmachine: (addons-657805)     <console type='pty'>
	I0729 00:49:00.222484   17906 main.go:141] libmachine: (addons-657805)       <target type='serial' port='0'/>
	I0729 00:49:00.222496   17906 main.go:141] libmachine: (addons-657805)     </console>
	I0729 00:49:00.222510   17906 main.go:141] libmachine: (addons-657805)     <rng model='virtio'>
	I0729 00:49:00.222533   17906 main.go:141] libmachine: (addons-657805)       <backend model='random'>/dev/random</backend>
	I0729 00:49:00.222549   17906 main.go:141] libmachine: (addons-657805)     </rng>
	I0729 00:49:00.222556   17906 main.go:141] libmachine: (addons-657805)     
	I0729 00:49:00.222564   17906 main.go:141] libmachine: (addons-657805)     
	I0729 00:49:00.222570   17906 main.go:141] libmachine: (addons-657805)   </devices>
	I0729 00:49:00.222580   17906 main.go:141] libmachine: (addons-657805) </domain>
	I0729 00:49:00.222591   17906 main.go:141] libmachine: (addons-657805) 
	I0729 00:49:00.228697   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:16:b4:f4 in network default
	I0729 00:49:00.229271   17906 main.go:141] libmachine: (addons-657805) Ensuring networks are active...
	I0729 00:49:00.229297   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:00.229989   17906 main.go:141] libmachine: (addons-657805) Ensuring network default is active
	I0729 00:49:00.230294   17906 main.go:141] libmachine: (addons-657805) Ensuring network mk-addons-657805 is active
	I0729 00:49:00.230780   17906 main.go:141] libmachine: (addons-657805) Getting domain xml...
	I0729 00:49:00.231456   17906 main.go:141] libmachine: (addons-657805) Creating domain...
	I0729 00:49:01.614082   17906 main.go:141] libmachine: (addons-657805) Waiting to get IP...
	I0729 00:49:01.615080   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:01.615445   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:01.615460   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:01.615429   17928 retry.go:31] will retry after 204.454408ms: waiting for machine to come up
	I0729 00:49:01.821896   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:01.822406   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:01.822429   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:01.822345   17928 retry.go:31] will retry after 340.902268ms: waiting for machine to come up
	I0729 00:49:02.165027   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:02.165450   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:02.165469   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:02.165426   17928 retry.go:31] will retry after 481.394629ms: waiting for machine to come up
	I0729 00:49:02.648032   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:02.648454   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:02.648483   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:02.648404   17928 retry.go:31] will retry after 440.65689ms: waiting for machine to come up
	I0729 00:49:03.091046   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:03.091475   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:03.091515   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:03.091415   17928 retry.go:31] will retry after 718.084669ms: waiting for machine to come up
	I0729 00:49:03.811506   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:03.811896   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:03.811933   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:03.811879   17928 retry.go:31] will retry after 711.527044ms: waiting for machine to come up
	I0729 00:49:04.525378   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:04.525939   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:04.526011   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:04.525864   17928 retry.go:31] will retry after 826.675486ms: waiting for machine to come up
	I0729 00:49:05.354082   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:05.354658   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:05.354685   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:05.354613   17928 retry.go:31] will retry after 1.397827758s: waiting for machine to come up
	I0729 00:49:06.753870   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:06.754272   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:06.754298   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:06.754220   17928 retry.go:31] will retry after 1.512959505s: waiting for machine to come up
	I0729 00:49:08.268435   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:08.268913   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:08.268939   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:08.268811   17928 retry.go:31] will retry after 1.714052035s: waiting for machine to come up
	I0729 00:49:09.985035   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:09.985427   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:09.985460   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:09.985385   17928 retry.go:31] will retry after 2.887581395s: waiting for machine to come up
	I0729 00:49:12.876427   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:12.876828   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:12.876853   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:12.876783   17928 retry.go:31] will retry after 3.107647028s: waiting for machine to come up
	I0729 00:49:15.986422   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:15.986834   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:15.986860   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:15.986796   17928 retry.go:31] will retry after 2.779081026s: waiting for machine to come up
	I0729 00:49:18.768270   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:18.768680   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find current IP address of domain addons-657805 in network mk-addons-657805
	I0729 00:49:18.768702   17906 main.go:141] libmachine: (addons-657805) DBG | I0729 00:49:18.768643   17928 retry.go:31] will retry after 4.387003412s: waiting for machine to come up
	I0729 00:49:23.160029   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.160691   17906 main.go:141] libmachine: (addons-657805) Found IP for machine: 192.168.39.18
	I0729 00:49:23.160715   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has current primary IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.160722   17906 main.go:141] libmachine: (addons-657805) Reserving static IP address...
	I0729 00:49:23.161246   17906 main.go:141] libmachine: (addons-657805) DBG | unable to find host DHCP lease matching {name: "addons-657805", mac: "52:54:00:fe:86:06", ip: "192.168.39.18"} in network mk-addons-657805
	I0729 00:49:23.230979   17906 main.go:141] libmachine: (addons-657805) DBG | Getting to WaitForSSH function...
	I0729 00:49:23.231009   17906 main.go:141] libmachine: (addons-657805) Reserved static IP address: 192.168.39.18
	I0729 00:49:23.231021   17906 main.go:141] libmachine: (addons-657805) Waiting for SSH to be available...
	I0729 00:49:23.233566   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.233930   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.233956   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.234083   17906 main.go:141] libmachine: (addons-657805) DBG | Using SSH client type: external
	I0729 00:49:23.234139   17906 main.go:141] libmachine: (addons-657805) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa (-rw-------)
	I0729 00:49:23.234541   17906 main.go:141] libmachine: (addons-657805) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 00:49:23.234568   17906 main.go:141] libmachine: (addons-657805) DBG | About to run SSH command:
	I0729 00:49:23.234583   17906 main.go:141] libmachine: (addons-657805) DBG | exit 0
	I0729 00:49:23.371272   17906 main.go:141] libmachine: (addons-657805) DBG | SSH cmd err, output: <nil>: 
	I0729 00:49:23.371578   17906 main.go:141] libmachine: (addons-657805) KVM machine creation complete!
	I0729 00:49:23.371797   17906 main.go:141] libmachine: (addons-657805) Calling .GetConfigRaw
	I0729 00:49:23.372321   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:23.372497   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:23.372640   17906 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 00:49:23.372652   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:23.374001   17906 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 00:49:23.374018   17906 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 00:49:23.374025   17906 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 00:49:23.374032   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.376220   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.376562   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.376592   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.376719   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.376882   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.377036   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.377172   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.377367   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.377597   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.377615   17906 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 00:49:23.486200   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 00:49:23.486225   17906 main.go:141] libmachine: Detecting the provisioner...
	I0729 00:49:23.486234   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.489073   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.489508   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.489535   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.489650   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.489874   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.490069   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.490241   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.490429   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.490638   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.490651   17906 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 00:49:23.603866   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 00:49:23.603929   17906 main.go:141] libmachine: found compatible host: buildroot
	I0729 00:49:23.603936   17906 main.go:141] libmachine: Provisioning with buildroot...
	I0729 00:49:23.603942   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:49:23.604177   17906 buildroot.go:166] provisioning hostname "addons-657805"
	I0729 00:49:23.604197   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:49:23.604369   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.606966   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.607381   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.607407   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.607588   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.607783   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.607957   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.608099   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.608244   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.608403   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.608415   17906 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-657805 && echo "addons-657805" | sudo tee /etc/hostname
	I0729 00:49:23.733174   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-657805
	
	I0729 00:49:23.733201   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.736078   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.736434   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.736460   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.736634   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:23.736815   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.736944   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:23.737049   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:23.737191   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:23.737342   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:23.737356   17906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-657805' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-657805/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-657805' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 00:49:23.855919   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 00:49:23.855948   17906 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 00:49:23.855994   17906 buildroot.go:174] setting up certificates
	I0729 00:49:23.856007   17906 provision.go:84] configureAuth start
	I0729 00:49:23.856028   17906 main.go:141] libmachine: (addons-657805) Calling .GetMachineName
	I0729 00:49:23.856319   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:23.858920   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.859298   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.859331   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.859461   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:23.861997   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.862294   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:23.862317   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:23.862441   17906 provision.go:143] copyHostCerts
	I0729 00:49:23.862505   17906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 00:49:23.862625   17906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 00:49:23.862717   17906 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 00:49:23.862764   17906 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.addons-657805 san=[127.0.0.1 192.168.39.18 addons-657805 localhost minikube]
	I0729 00:49:24.197977   17906 provision.go:177] copyRemoteCerts
	I0729 00:49:24.198036   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 00:49:24.198058   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.200828   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.201280   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.201317   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.201467   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.201659   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.201852   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.201988   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.289235   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 00:49:24.313074   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 00:49:24.336065   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 00:49:24.359513   17906 provision.go:87] duration metric: took 503.489652ms to configureAuth
	I0729 00:49:24.359541   17906 buildroot.go:189] setting minikube options for container-runtime
	I0729 00:49:24.359735   17906 config.go:182] Loaded profile config "addons-657805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 00:49:24.359821   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.362494   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.362859   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.362890   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.363111   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.363302   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.363454   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.363600   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.363723   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:24.363882   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:24.363896   17906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 00:49:24.630050   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 00:49:24.630075   17906 main.go:141] libmachine: Checking connection to Docker...
	I0729 00:49:24.630082   17906 main.go:141] libmachine: (addons-657805) Calling .GetURL
	I0729 00:49:24.631396   17906 main.go:141] libmachine: (addons-657805) DBG | Using libvirt version 6000000
	I0729 00:49:24.633293   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.633692   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.633714   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.633840   17906 main.go:141] libmachine: Docker is up and running!
	I0729 00:49:24.633856   17906 main.go:141] libmachine: Reticulating splines...
	I0729 00:49:24.633862   17906 client.go:171] duration metric: took 25.397773855s to LocalClient.Create
	I0729 00:49:24.633882   17906 start.go:167] duration metric: took 25.397829972s to libmachine.API.Create "addons-657805"
	I0729 00:49:24.633891   17906 start.go:293] postStartSetup for "addons-657805" (driver="kvm2")
	I0729 00:49:24.633900   17906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 00:49:24.633916   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.634166   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 00:49:24.634191   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.636168   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.636499   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.636531   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.636629   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.636802   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.636920   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.637064   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.720699   17906 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 00:49:24.724864   17906 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 00:49:24.724891   17906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 00:49:24.724966   17906 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 00:49:24.724991   17906 start.go:296] duration metric: took 91.094902ms for postStartSetup
	I0729 00:49:24.725040   17906 main.go:141] libmachine: (addons-657805) Calling .GetConfigRaw
	I0729 00:49:24.725618   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:24.728043   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.728399   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.728422   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.728645   17906 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/config.json ...
	I0729 00:49:24.728849   17906 start.go:128] duration metric: took 25.510722443s to createHost
	I0729 00:49:24.728871   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.731183   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.731553   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.731581   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.731720   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.731887   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.732043   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.732170   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.732322   17906 main.go:141] libmachine: Using SSH client type: native
	I0729 00:49:24.732474   17906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0729 00:49:24.732484   17906 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 00:49:24.843754   17906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722214164.822536355
	
	I0729 00:49:24.843780   17906 fix.go:216] guest clock: 1722214164.822536355
	I0729 00:49:24.843787   17906 fix.go:229] Guest: 2024-07-29 00:49:24.822536355 +0000 UTC Remote: 2024-07-29 00:49:24.728860946 +0000 UTC m=+25.609017205 (delta=93.675409ms)
	I0729 00:49:24.843826   17906 fix.go:200] guest clock delta is within tolerance: 93.675409ms
	I0729 00:49:24.843832   17906 start.go:83] releasing machines lock for "addons-657805", held for 25.625791047s
	I0729 00:49:24.843868   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.844112   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:24.846571   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.846886   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.846904   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.847114   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.847602   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.847782   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:24.847897   17906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 00:49:24.847952   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.847983   17906 ssh_runner.go:195] Run: cat /version.json
	I0729 00:49:24.848007   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:24.850454   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.850653   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.850750   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.850776   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.851010   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:24.851030   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:24.851098   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.851279   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.851400   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:24.851471   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.851527   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:24.851659   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.851742   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:24.851881   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:24.932358   17906 ssh_runner.go:195] Run: systemctl --version
	I0729 00:49:24.956334   17906 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 00:49:25.109846   17906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 00:49:25.115932   17906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 00:49:25.116005   17906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 00:49:25.133099   17906 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 00:49:25.133123   17906 start.go:495] detecting cgroup driver to use...
	I0729 00:49:25.133186   17906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 00:49:25.150111   17906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 00:49:25.164090   17906 docker.go:217] disabling cri-docker service (if available) ...
	I0729 00:49:25.164151   17906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 00:49:25.177959   17906 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 00:49:25.191332   17906 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 00:49:25.309961   17906 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 00:49:25.465122   17906 docker.go:233] disabling docker service ...
	I0729 00:49:25.465185   17906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 00:49:25.479885   17906 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 00:49:25.492735   17906 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 00:49:25.630654   17906 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 00:49:25.752102   17906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 00:49:25.766123   17906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 00:49:25.784482   17906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 00:49:25.784543   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.794764   17906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 00:49:25.794833   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.805416   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.815439   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.825390   17906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 00:49:25.835475   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.846720   17906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.863829   17906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 00:49:25.874313   17906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 00:49:25.884419   17906 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 00:49:25.884476   17906 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 00:49:25.897228   17906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 00:49:25.906942   17906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 00:49:26.037142   17906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 00:49:26.175837   17906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 00:49:26.175931   17906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 00:49:26.180466   17906 start.go:563] Will wait 60s for crictl version
	I0729 00:49:26.180520   17906 ssh_runner.go:195] Run: which crictl
	I0729 00:49:26.184353   17906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 00:49:26.221927   17906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 00:49:26.222028   17906 ssh_runner.go:195] Run: crio --version
	I0729 00:49:26.248457   17906 ssh_runner.go:195] Run: crio --version
	I0729 00:49:26.276634   17906 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 00:49:26.278041   17906 main.go:141] libmachine: (addons-657805) Calling .GetIP
	I0729 00:49:26.280495   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:26.280824   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:26.280849   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:26.281038   17906 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 00:49:26.285129   17906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 00:49:26.297728   17906 kubeadm.go:883] updating cluster {Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 00:49:26.297823   17906 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 00:49:26.297869   17906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 00:49:26.330169   17906 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 00:49:26.330236   17906 ssh_runner.go:195] Run: which lz4
	I0729 00:49:26.334003   17906 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 00:49:26.338001   17906 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 00:49:26.338030   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 00:49:27.698320   17906 crio.go:462] duration metric: took 1.364336648s to copy over tarball
	I0729 00:49:27.698400   17906 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 00:49:29.980924   17906 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282492078s)
	I0729 00:49:29.980957   17906 crio.go:469] duration metric: took 2.282605625s to extract the tarball
	I0729 00:49:29.980967   17906 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 00:49:30.018521   17906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 00:49:30.061246   17906 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 00:49:30.061268   17906 cache_images.go:84] Images are preloaded, skipping loading
	I0729 00:49:30.061275   17906 kubeadm.go:934] updating node { 192.168.39.18 8443 v1.30.3 crio true true} ...
	I0729 00:49:30.061367   17906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-657805 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 00:49:30.061426   17906 ssh_runner.go:195] Run: crio config
	I0729 00:49:30.116253   17906 cni.go:84] Creating CNI manager for ""
	I0729 00:49:30.116282   17906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:49:30.116297   17906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 00:49:30.116322   17906 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-657805 NodeName:addons-657805 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 00:49:30.116587   17906 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-657805"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 00:49:30.116694   17906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 00:49:30.126292   17906 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 00:49:30.126351   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 00:49:30.135463   17906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 00:49:30.153494   17906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 00:49:30.171708   17906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0729 00:49:30.188677   17906 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0729 00:49:30.192597   17906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 00:49:30.204265   17906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 00:49:30.323804   17906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 00:49:30.340408   17906 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805 for IP: 192.168.39.18
	I0729 00:49:30.340435   17906 certs.go:194] generating shared ca certs ...
	I0729 00:49:30.340454   17906 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.340617   17906 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 00:49:30.480278   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt ...
	I0729 00:49:30.480309   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt: {Name:mk8fad2e722cf917c9f34cecde4889e198331a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.480479   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key ...
	I0729 00:49:30.480489   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key: {Name:mk2f62da53b8d736f082b80a4ee556be190bf299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.480557   17906 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 00:49:30.648740   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt ...
	I0729 00:49:30.648766   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt: {Name:mk47a5e124a0b1e459d544e63af797aed9fc919c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.648915   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key ...
	I0729 00:49:30.648925   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key: {Name:mke641a7096605541c4c9bff5414852198e2f104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.648987   17906 certs.go:256] generating profile certs ...
	I0729 00:49:30.649040   17906 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.key
	I0729 00:49:30.649054   17906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt with IP's: []
	I0729 00:49:30.886343   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt ...
	I0729 00:49:30.886370   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: {Name:mkf6b9c9729eabd73c3157348dae13e531b4bde5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.886526   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.key ...
	I0729 00:49:30.886535   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.key: {Name:mkbaded4f4aa28f2843e7e83c66b94c0a6e0a24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:30.886605   17906 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba
	I0729 00:49:30.886622   17906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18]
	I0729 00:49:31.040860   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba ...
	I0729 00:49:31.040884   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba: {Name:mk26cee3f94a04e79d6ee1fb9d24deea9fa1f918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.041026   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba ...
	I0729 00:49:31.041039   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba: {Name:mka9a2fc29885d70db24d3c0b548df291093ac2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.041114   17906 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt.8590c7ba -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt
	I0729 00:49:31.041184   17906 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key.8590c7ba -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key
	I0729 00:49:31.041227   17906 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key
	I0729 00:49:31.041243   17906 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt with IP's: []
	I0729 00:49:31.268569   17906 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt ...
	I0729 00:49:31.268595   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt: {Name:mk7051e6b608fd5e24e32d0aa45888104a2365ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.268766   17906 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key ...
	I0729 00:49:31.268782   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key: {Name:mk6bd729fae7e838d0eb4a8d5fd3ab3258a5b5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:31.268992   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 00:49:31.269032   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 00:49:31.269068   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 00:49:31.269096   17906 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 00:49:31.269691   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 00:49:31.298626   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 00:49:31.323384   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 00:49:31.347053   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 00:49:31.371926   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 00:49:31.396438   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 00:49:31.423143   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 00:49:31.449656   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 00:49:31.476261   17906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 00:49:31.499889   17906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 00:49:31.516583   17906 ssh_runner.go:195] Run: openssl version
	I0729 00:49:31.522273   17906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 00:49:31.533141   17906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 00:49:31.537442   17906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 00:49:31.537493   17906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 00:49:31.543254   17906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 00:49:31.553591   17906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 00:49:31.557414   17906 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 00:49:31.557463   17906 kubeadm.go:392] StartCluster: {Name:addons-657805 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-657805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:49:31.557567   17906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 00:49:31.557605   17906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 00:49:31.596168   17906 cri.go:89] found id: ""
	I0729 00:49:31.596248   17906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 00:49:31.605941   17906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 00:49:31.615106   17906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 00:49:31.624105   17906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 00:49:31.624125   17906 kubeadm.go:157] found existing configuration files:
	
	I0729 00:49:31.624167   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 00:49:31.632727   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 00:49:31.632781   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 00:49:31.641729   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 00:49:31.650252   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 00:49:31.650314   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 00:49:31.659385   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 00:49:31.668349   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 00:49:31.668412   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 00:49:31.677268   17906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 00:49:31.685973   17906 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 00:49:31.686032   17906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 00:49:31.695004   17906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 00:49:31.751842   17906 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 00:49:31.751909   17906 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 00:49:31.892868   17906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 00:49:31.892999   17906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 00:49:31.893140   17906 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 00:49:32.088265   17906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 00:49:32.243869   17906 out.go:204]   - Generating certificates and keys ...
	I0729 00:49:32.243999   17906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 00:49:32.244124   17906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 00:49:32.251967   17906 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 00:49:32.462399   17906 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 00:49:32.533893   17906 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 00:49:32.649445   17906 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 00:49:32.763595   17906 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 00:49:32.763771   17906 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-657805 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0729 00:49:32.934533   17906 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 00:49:32.934678   17906 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-657805 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0729 00:49:33.089919   17906 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 00:49:33.160772   17906 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 00:49:33.361029   17906 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 00:49:33.361193   17906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 00:49:33.476473   17906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 00:49:33.789943   17906 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 00:49:33.965249   17906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 00:49:34.140954   17906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 00:49:34.269185   17906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 00:49:34.269725   17906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 00:49:34.273726   17906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 00:49:34.407423   17906 out.go:204]   - Booting up control plane ...
	I0729 00:49:34.407587   17906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 00:49:34.407694   17906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 00:49:34.407799   17906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 00:49:34.407947   17906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 00:49:34.408078   17906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 00:49:34.408133   17906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 00:49:34.431217   17906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 00:49:34.431326   17906 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 00:49:35.431700   17906 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001409251s
	I0729 00:49:35.431796   17906 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 00:49:40.432783   17906 kubeadm.go:310] [api-check] The API server is healthy after 5.002025846s
	I0729 00:49:40.443801   17906 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 00:49:40.460370   17906 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 00:49:40.490707   17906 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 00:49:40.490930   17906 kubeadm.go:310] [mark-control-plane] Marking the node addons-657805 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 00:49:40.509553   17906 kubeadm.go:310] [bootstrap-token] Using token: 4tz30c.7n1hf4yodd1tj9r8
	I0729 00:49:40.511010   17906 out.go:204]   - Configuring RBAC rules ...
	I0729 00:49:40.511158   17906 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 00:49:40.528355   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 00:49:40.541042   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 00:49:40.546586   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 00:49:40.551245   17906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 00:49:40.555682   17906 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 00:49:40.837498   17906 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 00:49:41.278085   17906 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 00:49:41.837186   17906 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 00:49:41.838141   17906 kubeadm.go:310] 
	I0729 00:49:41.838233   17906 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 00:49:41.838251   17906 kubeadm.go:310] 
	I0729 00:49:41.838340   17906 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 00:49:41.838350   17906 kubeadm.go:310] 
	I0729 00:49:41.838394   17906 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 00:49:41.838473   17906 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 00:49:41.838545   17906 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 00:49:41.838554   17906 kubeadm.go:310] 
	I0729 00:49:41.838626   17906 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 00:49:41.838653   17906 kubeadm.go:310] 
	I0729 00:49:41.838738   17906 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 00:49:41.838748   17906 kubeadm.go:310] 
	I0729 00:49:41.838812   17906 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 00:49:41.838911   17906 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 00:49:41.839004   17906 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 00:49:41.839014   17906 kubeadm.go:310] 
	I0729 00:49:41.839158   17906 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 00:49:41.839237   17906 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 00:49:41.839244   17906 kubeadm.go:310] 
	I0729 00:49:41.839311   17906 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4tz30c.7n1hf4yodd1tj9r8 \
	I0729 00:49:41.839396   17906 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 \
	I0729 00:49:41.839415   17906 kubeadm.go:310] 	--control-plane 
	I0729 00:49:41.839421   17906 kubeadm.go:310] 
	I0729 00:49:41.839489   17906 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 00:49:41.839497   17906 kubeadm.go:310] 
	I0729 00:49:41.839580   17906 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4tz30c.7n1hf4yodd1tj9r8 \
	I0729 00:49:41.839687   17906 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 
	I0729 00:49:41.840124   17906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 00:49:41.840193   17906 cni.go:84] Creating CNI manager for ""
	I0729 00:49:41.840210   17906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:49:41.841954   17906 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 00:49:41.843023   17906 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 00:49:41.854010   17906 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 00:49:41.872370   17906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 00:49:41.872394   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:41.872456   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-657805 minikube.k8s.io/updated_at=2024_07_29T00_49_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=addons-657805 minikube.k8s.io/primary=true
	I0729 00:49:42.009398   17906 ops.go:34] apiserver oom_adj: -16
	I0729 00:49:42.009430   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:42.509447   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:43.010415   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:43.509514   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:44.009837   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:44.510078   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:45.009708   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:45.510416   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:46.009642   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:46.510214   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:47.010060   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:47.509694   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:48.009883   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:48.510148   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:49.010385   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:49.510249   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:50.010201   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:50.510062   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:51.009494   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:51.510074   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:52.009824   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:52.510433   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:53.009851   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:53.509613   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:54.009509   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:54.509822   17906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 00:49:54.615836   17906 kubeadm.go:1113] duration metric: took 12.743492105s to wait for elevateKubeSystemPrivileges
	I0729 00:49:54.615869   17906 kubeadm.go:394] duration metric: took 23.058408518s to StartCluster
	I0729 00:49:54.615888   17906 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:54.616017   17906 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:49:54.616486   17906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:49:54.616685   17906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 00:49:54.616709   17906 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 00:49:54.616797   17906 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 00:49:54.616901   17906 config.go:182] Loaded profile config "addons-657805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 00:49:54.616932   17906 addons.go:69] Setting yakd=true in profile "addons-657805"
	I0729 00:49:54.617194   17906 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-657805"
	I0729 00:49:54.617236   17906 addons.go:234] Setting addon yakd=true in "addons-657805"
	I0729 00:49:54.617241   17906 addons.go:69] Setting ingress-dns=true in profile "addons-657805"
	I0729 00:49:54.617274   17906 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-657805"
	I0729 00:49:54.617297   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.617313   17906 addons.go:234] Setting addon ingress-dns=true in "addons-657805"
	I0729 00:49:54.616963   17906 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-657805"
	I0729 00:49:54.617414   17906 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-657805"
	I0729 00:49:54.617452   17906 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-657805"
	I0729 00:49:54.617490   17906 addons.go:69] Setting registry=true in profile "addons-657805"
	I0729 00:49:54.617524   17906 addons.go:234] Setting addon registry=true in "addons-657805"
	I0729 00:49:54.617528   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.617550   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.617585   17906 addons.go:69] Setting ingress=true in profile "addons-657805"
	I0729 00:49:54.617639   17906 addons.go:234] Setting addon ingress=true in "addons-657805"
	I0729 00:49:54.617673   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618028   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.616969   17906 addons.go:69] Setting metrics-server=true in profile "addons-657805"
	I0729 00:49:54.618123   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.618153   17906 addons.go:234] Setting addon metrics-server=true in "addons-657805"
	I0729 00:49:54.616954   17906 addons.go:69] Setting inspektor-gadget=true in profile "addons-657805"
	I0729 00:49:54.618193   17906 addons.go:234] Setting addon inspektor-gadget=true in "addons-657805"
	I0729 00:49:54.618216   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618290   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618297   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618315   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.618372   17906 addons.go:69] Setting volumesnapshots=true in profile "addons-657805"
	I0729 00:49:54.618402   17906 addons.go:234] Setting addon volumesnapshots=true in "addons-657805"
	I0729 00:49:54.618431   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618486   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.616946   17906 addons.go:69] Setting helm-tiller=true in profile "addons-657805"
	I0729 00:49:54.618556   17906 addons.go:234] Setting addon helm-tiller=true in "addons-657805"
	I0729 00:49:54.618628   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618632   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618665   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.618696   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.616938   17906 addons.go:69] Setting gcp-auth=true in profile "addons-657805"
	I0729 00:49:54.618721   17906 mustload.go:65] Loading cluster: addons-657805
	I0729 00:49:54.618867   17906 addons.go:69] Setting volcano=true in profile "addons-657805"
	I0729 00:49:54.618887   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.618894   17906 addons.go:234] Setting addon volcano=true in "addons-657805"
	I0729 00:49:54.618917   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.616971   17906 addons.go:69] Setting default-storageclass=true in profile "addons-657805"
	I0729 00:49:54.619319   17906 out.go:177] * Verifying Kubernetes components...
	I0729 00:49:54.619379   17906 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-657805"
	I0729 00:49:54.619416   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.619416   17906 addons.go:69] Setting storage-provisioner=true in profile "addons-657805"
	I0729 00:49:54.619441   17906 addons.go:234] Setting addon storage-provisioner=true in "addons-657805"
	I0729 00:49:54.619469   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.619817   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.619862   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.620070   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620094   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.619335   17906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-657805"
	I0729 00:49:54.620413   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620442   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.620509   17906 config.go:182] Loaded profile config "addons-657805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 00:49:54.620640   17906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 00:49:54.620881   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620882   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.620906   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.621186   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.619339   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.618028   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.616961   17906 addons.go:69] Setting cloud-spanner=true in profile "addons-657805"
	I0729 00:49:54.626044   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.626080   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.626291   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.626313   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631130   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.631177   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631347   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.631377   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631444   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.631524   17906 addons.go:234] Setting addon cloud-spanner=true in "addons-657805"
	I0729 00:49:54.631578   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.631639   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.653081   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0729 00:49:54.653094   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I0729 00:49:54.653547   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0729 00:49:54.653669   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.654159   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.654189   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.654390   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.654655   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.655230   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.655261   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.655466   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0729 00:49:54.655597   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.655622   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.657877   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.657918   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.658326   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.658377   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.658391   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.658795   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.659166   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.659217   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.659397   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.659437   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.659866   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.659883   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.660236   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.660274   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0729 00:49:54.660613   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45317
	I0729 00:49:54.666821   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37717
	I0729 00:49:54.667218   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0729 00:49:54.667292   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33977
	I0729 00:49:54.667361   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0729 00:49:54.667433   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0729 00:49:54.667560   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.667584   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.667610   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.667730   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.667778   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.667978   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668051   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668097   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668151   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.668189   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.669234   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669250   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669359   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669367   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669466   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669474   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669577   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669588   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669684   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669694   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669794   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.669802   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.669845   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.669878   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.669917   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0729 00:49:54.670054   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.670090   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.670223   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.670948   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.670989   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.671022   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.671430   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.671459   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.671634   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.672039   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.672067   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.672971   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.673010   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.673442   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.673465   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.673647   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.673662   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.673837   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.673850   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.674047   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.674162   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.676137   17906 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-657805"
	I0729 00:49:54.676178   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.676516   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.676542   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.683409   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.683546   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.684008   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.684055   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.684612   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.684644   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.686887   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.689967   17906 addons.go:234] Setting addon default-storageclass=true in "addons-657805"
	I0729 00:49:54.690010   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:49:54.690350   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.690386   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.701270   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0729 00:49:54.701759   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.702664   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.702683   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.703071   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.703248   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.706622   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.709177   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0729 00:49:54.709672   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.709719   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0729 00:49:54.710212   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.710233   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.710733   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.710908   17906 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 00:49:54.710918   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.711176   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.711309   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.711320   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.712192   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.712271   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 00:49:54.712284   17906 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 00:49:54.712301   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.712474   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.713564   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.715211   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 00:49:54.715728   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.716344   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 00:49:54.716359   17906 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 00:49:54.716378   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.716419   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0729 00:49:54.716567   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.717153   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.717182   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.717531   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.717676   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.717841   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.717958   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.718048   17906 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 00:49:54.718160   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.718881   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.718897   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.719185   17906 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 00:49:54.719207   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 00:49:54.719225   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.719417   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.719617   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.719823   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.720866   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.720892   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.721484   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.721667   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.721819   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.721957   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.722391   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0729 00:49:54.722453   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
	I0729 00:49:54.722746   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.722787   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.723469   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.723485   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.723598   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.723608   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.723838   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.724246   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.724447   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.724507   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.725563   17906 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 00:49:54.726225   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0729 00:49:54.726490   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0729 00:49:54.726704   17906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 00:49:54.726718   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 00:49:54.726734   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.726805   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.727145   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46213
	I0729 00:49:54.727295   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I0729 00:49:54.727411   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.727489   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.727816   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.727834   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.727963   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.727976   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.728028   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.728253   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.728450   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.728503   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.728549   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.728566   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.728591   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42807
	I0729 00:49:54.728746   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.728844   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.728872   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.728912   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.729098   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.729116   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.729172   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.729188   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.729316   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.729332   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.729628   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.729666   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.729706   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.729763   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.729816   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.730064   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.730239   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.730509   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.731158   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.731347   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.731387   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.731499   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.731662   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.732130   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.732233   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.732252   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.732287   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.732753   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.732930   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.733130   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.733497   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.733926   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0729 00:49:54.734330   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.734416   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.734735   17906 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 00:49:54.734745   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.734814   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.734828   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.734994   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 00:49:54.735073   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.735873   17906 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 00:49:54.735887   17906 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 00:49:54.735905   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.736117   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.736153   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.736395   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 00:49:54.737204   17906 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 00:49:54.738763   17906 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 00:49:54.738778   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 00:49:54.738795   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.739245   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.739296   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 00:49:54.739409   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 00:49:54.739948   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.739969   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.740297   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.740491   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.740867   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.741009   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.741752   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 00:49:54.741833   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 00:49:54.742258   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.742653   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.742671   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.742924   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.743072   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.743355   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.743500   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.744122   17906 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 00:49:54.744136   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 00:49:54.744150   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.745444   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 00:49:54.746956   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 00:49:54.747444   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.748021   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.748040   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.749065   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.749276   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.749344   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 00:49:54.749440   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.749642   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.749900   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0729 00:49:54.750182   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0729 00:49:54.750809   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.751414   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.751436   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.751844   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 00:49:54.752098   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.753301   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.753337   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.753547   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0729 00:49:54.753660   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I0729 00:49:54.753741   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0729 00:49:54.754253   17906 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 00:49:54.754576   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.754660   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.754725   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.754781   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.755232   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 00:49:54.755257   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 00:49:54.755274   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.755693   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.755704   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.755709   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.755720   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.755827   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.755837   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.756212   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.756250   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.756424   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.756818   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.756842   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.756862   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.756884   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.757042   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.757092   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.757820   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.758368   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:49:54.758404   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:49:54.758608   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34125
	I0729 00:49:54.758785   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.759074   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:54.759087   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:54.759260   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.759277   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:54.759294   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:54.759303   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:54.759310   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:54.759467   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:54.759478   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 00:49:54.759545   17906 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 00:49:54.759797   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.759815   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.760192   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.760479   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.761438   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.762526   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.762919   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.762937   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.762977   17906 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 00:49:54.763096   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.763211   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.763364   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.763516   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.763647   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.764648   17906 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 00:49:54.764773   17906 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 00:49:54.764786   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 00:49:54.764802   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.766165   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 00:49:54.766185   17906 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 00:49:54.766203   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.769269   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.769708   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.769729   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.769991   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.770194   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.770372   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.770442   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.770673   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.770899   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.770915   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.771120   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.771292   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.771432   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.771588   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.774309   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0729 00:49:54.774774   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.775249   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.775271   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.775561   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.775792   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.777250   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 00:49:54.777461   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.777612   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.777850   17906 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 00:49:54.777865   17906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 00:49:54.777883   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.778070   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.778088   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.778460   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.778629   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.780645   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.780651   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0729 00:49:54.781179   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.782248   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.782270   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.782287   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.782659   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.782692   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.782834   17906 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 00:49:54.782890   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.782952   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.783133   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.783236   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.783286   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.783470   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.784745   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I0729 00:49:54.784875   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.785238   17906 out.go:177]   - Using image docker.io/busybox:stable
	I0729 00:49:54.785281   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:49:54.785685   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:49:54.785698   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:49:54.785980   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:49:54.786156   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:49:54.786193   17906 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 00:49:54.786305   17906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 00:49:54.786318   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 00:49:54.786328   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.787865   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:49:54.788470   17906 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 00:49:54.789271   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.789334   17906 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 00:49:54.789772   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.789806   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.789884   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.790015   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.790116   17906 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 00:49:54.790142   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 00:49:54.790163   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.790119   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.790289   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.790773   17906 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 00:49:54.790786   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 00:49:54.790798   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:49:54.793507   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.793778   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.793828   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.793846   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.794160   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.794328   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:49:54.794346   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:49:54.794444   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.794535   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:49:54.794617   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.794656   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:49:54.794716   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:54.794971   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:49:54.795112   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:49:55.072421   17906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 00:49:55.072485   17906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 00:49:55.116870   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 00:49:55.189267   17906 node_ready.go:35] waiting up to 6m0s for node "addons-657805" to be "Ready" ...
	I0729 00:49:55.193269   17906 node_ready.go:49] node "addons-657805" has status "Ready":"True"
	I0729 00:49:55.193292   17906 node_ready.go:38] duration metric: took 4.001508ms for node "addons-657805" to be "Ready" ...
	I0729 00:49:55.193300   17906 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 00:49:55.202676   17906 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace to be "Ready" ...
	I0729 00:49:55.265951   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 00:49:55.270689   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 00:49:55.284057   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 00:49:55.300524   17906 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 00:49:55.300555   17906 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 00:49:55.301832   17906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 00:49:55.301882   17906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 00:49:55.324027   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 00:49:55.339640   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 00:49:55.371991   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 00:49:55.372023   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 00:49:55.377604   17906 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 00:49:55.377634   17906 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 00:49:55.377843   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 00:49:55.377865   17906 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 00:49:55.406710   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 00:49:55.410123   17906 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 00:49:55.410150   17906 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 00:49:55.434351   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 00:49:55.434377   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 00:49:55.511476   17906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 00:49:55.511505   17906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 00:49:55.526194   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 00:49:55.526223   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 00:49:55.561940   17906 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 00:49:55.561962   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 00:49:55.582147   17906 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 00:49:55.582173   17906 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 00:49:55.608900   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 00:49:55.608922   17906 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 00:49:55.616808   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 00:49:55.616829   17906 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 00:49:55.655468   17906 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 00:49:55.655489   17906 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 00:49:55.755684   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 00:49:55.755711   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 00:49:55.793529   17906 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 00:49:55.793565   17906 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 00:49:55.823152   17906 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 00:49:55.823173   17906 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 00:49:55.865314   17906 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 00:49:55.865342   17906 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 00:49:55.885258   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 00:49:55.897663   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 00:49:55.906400   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 00:49:55.906418   17906 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 00:49:55.932495   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 00:49:55.932516   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 00:49:55.985755   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 00:49:55.985786   17906 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 00:49:55.997149   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 00:49:56.015636   17906 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 00:49:56.015659   17906 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 00:49:56.094125   17906 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 00:49:56.094149   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 00:49:56.151609   17906 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 00:49:56.151632   17906 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 00:49:56.263071   17906 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 00:49:56.263092   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 00:49:56.293802   17906 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 00:49:56.293824   17906 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 00:49:56.363683   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 00:49:56.363707   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 00:49:56.407530   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 00:49:56.523004   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 00:49:56.523028   17906 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 00:49:56.657695   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 00:49:56.704167   17906 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 00:49:56.704197   17906 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 00:49:56.948050   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 00:49:56.948083   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 00:49:57.068094   17906 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 00:49:57.068120   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 00:49:57.231102   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 00:49:57.231124   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 00:49:57.234328   17906 pod_ready.go:102] pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace has status "Ready":"False"
	I0729 00:49:57.463110   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 00:49:57.589959   17906 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.517440919s)
	I0729 00:49:57.589995   17906 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 00:49:57.683519   17906 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 00:49:57.683549   17906 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 00:49:57.932817   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 00:49:58.115319   17906 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-657805" context rescaled to 1 replicas
	I0729 00:49:58.650909   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.534008215s)
	I0729 00:49:58.650968   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:58.650981   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:58.651294   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:58.651336   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:58.651363   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:58.651370   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:58.651376   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:58.651618   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:58.651625   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:58.651636   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296341   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.030348008s)
	I0729 00:49:59.296395   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296405   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296408   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.025681314s)
	I0729 00:49:59.296433   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.012348778s)
	I0729 00:49:59.296460   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296468   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296477   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296479   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296873   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.296875   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.296884   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.296894   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296876   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.296903   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296905   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.296909   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296910   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296914   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.296914   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.296922   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296930   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.296918   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.296972   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.297295   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.297300   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.297308   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.297315   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.297328   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.297335   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.297365   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.297384   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.297391   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.310397   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:49:59.310429   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:49:59.310678   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:49:59.310737   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:49:59.310753   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:49:59.708188   17906 pod_ready.go:102] pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace has status "Ready":"False"
	I0729 00:50:00.790612   17906 pod_ready.go:92] pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:00.790636   17906 pod_ready.go:81] duration metric: took 5.587932348s for pod "coredns-7db6d8ff4d-sglhh" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.790645   17906 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t65vz" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.863444   17906 pod_ready.go:92] pod "coredns-7db6d8ff4d-t65vz" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:00.863479   17906 pod_ready.go:81] duration metric: took 72.826436ms for pod "coredns-7db6d8ff4d-t65vz" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.863492   17906 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.928817   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.604750306s)
	I0729 00:50:00.928874   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:00.928889   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:00.929296   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:00.929304   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:00.929317   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:00.929327   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:00.929335   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:00.929577   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:00.929627   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:00.929645   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:00.947041   17906 pod_ready.go:92] pod "etcd-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:00.947076   17906 pod_ready.go:81] duration metric: took 83.574911ms for pod "etcd-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:00.947089   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.013509   17906 pod_ready.go:92] pod "kube-apiserver-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.013529   17906 pod_ready.go:81] duration metric: took 66.432029ms for pod "kube-apiserver-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.013538   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.030554   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:01.030576   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:01.030895   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:01.030898   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:01.030923   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:01.055189   17906 pod_ready.go:92] pod "kube-controller-manager-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.055214   17906 pod_ready.go:81] duration metric: took 41.669652ms for pod "kube-controller-manager-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.055224   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvp86" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.112577   17906 pod_ready.go:92] pod "kube-proxy-kvp86" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.112598   17906 pod_ready.go:81] duration metric: took 57.368109ms for pod "kube-proxy-kvp86" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.112606   17906 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.537549   17906 pod_ready.go:92] pod "kube-scheduler-addons-657805" in "kube-system" namespace has status "Ready":"True"
	I0729 00:50:01.537576   17906 pod_ready.go:81] duration metric: took 424.963454ms for pod "kube-scheduler-addons-657805" in "kube-system" namespace to be "Ready" ...
	I0729 00:50:01.537585   17906 pod_ready.go:38] duration metric: took 6.344275005s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 00:50:01.537600   17906 api_server.go:52] waiting for apiserver process to appear ...
	I0729 00:50:01.537656   17906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 00:50:01.747973   17906 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 00:50:01.748011   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:50:01.750727   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:01.751237   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:50:01.751270   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:01.751465   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:50:01.751651   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:50:01.751855   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:50:01.752049   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:50:02.526757   17906 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 00:50:02.737600   17906 addons.go:234] Setting addon gcp-auth=true in "addons-657805"
	I0729 00:50:02.737663   17906 host.go:66] Checking if "addons-657805" exists ...
	I0729 00:50:02.737987   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:50:02.738017   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:50:02.753382   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0729 00:50:02.753748   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:50:02.754229   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:50:02.754251   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:50:02.754636   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:50:02.755289   17906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 00:50:02.755321   17906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 00:50:02.771176   17906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0729 00:50:02.771700   17906 main.go:141] libmachine: () Calling .GetVersion
	I0729 00:50:02.772325   17906 main.go:141] libmachine: Using API Version  1
	I0729 00:50:02.772350   17906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 00:50:02.772839   17906 main.go:141] libmachine: () Calling .GetMachineName
	I0729 00:50:02.773045   17906 main.go:141] libmachine: (addons-657805) Calling .GetState
	I0729 00:50:02.774969   17906 main.go:141] libmachine: (addons-657805) Calling .DriverName
	I0729 00:50:02.775235   17906 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 00:50:02.775259   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHHostname
	I0729 00:50:02.777809   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:02.778224   17906 main.go:141] libmachine: (addons-657805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:86:06", ip: ""} in network mk-addons-657805: {Iface:virbr1 ExpiryTime:2024-07-29 01:49:14 +0000 UTC Type:0 Mac:52:54:00:fe:86:06 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:addons-657805 Clientid:01:52:54:00:fe:86:06}
	I0729 00:50:02.778252   17906 main.go:141] libmachine: (addons-657805) DBG | domain addons-657805 has defined IP address 192.168.39.18 and MAC address 52:54:00:fe:86:06 in network mk-addons-657805
	I0729 00:50:02.778405   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHPort
	I0729 00:50:02.778582   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHKeyPath
	I0729 00:50:02.778725   17906 main.go:141] libmachine: (addons-657805) Calling .GetSSHUsername
	I0729 00:50:02.778879   17906 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/addons-657805/id_rsa Username:docker}
	I0729 00:50:03.903800   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.564128622s)
	I0729 00:50:03.903845   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.497111713s)
	I0729 00:50:03.903857   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903865   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903869   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.903874   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.903931   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.018635942s)
	I0729 00:50:03.903950   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.006260506s)
	I0729 00:50:03.903966   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903977   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.903977   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.903990   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904080   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.906894259s)
	I0729 00:50:03.904114   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904127   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904226   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.496666681s)
	I0729 00:50:03.904246   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904255   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904282   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904303   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904315   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904321   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904326   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904331   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904336   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904340   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904345   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904307   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904391   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904410   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904416   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904418   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.246694443s)
	W0729 00:50:03.904447   17906 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 00:50:03.904465   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904477   17906 retry.go:31] will retry after 316.845474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 00:50:03.904480   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904424   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904491   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904496   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904498   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904634   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.441491896s)
	I0729 00:50:03.904667   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.904675   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.904701   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.904732   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.904739   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.904748   17906 addons.go:475] Verifying addon registry=true in "addons-657805"
	I0729 00:50:03.905820   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.905879   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.905888   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.905973   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.905998   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906005   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906012   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.906019   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.906186   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906222   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906230   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906238   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.906245   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.906385   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906414   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906421   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906796   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906843   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906854   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906863   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:03.906873   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:03.906929   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.906957   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.906965   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.906973   17906 addons.go:475] Verifying addon metrics-server=true in "addons-657805"
	I0729 00:50:03.908227   17906 out.go:177] * Verifying registry addon...
	I0729 00:50:03.909125   17906 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-657805 service yakd-dashboard -n yakd-dashboard
	
	I0729 00:50:03.908373   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.908395   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.909632   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.908431   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.908447   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:03.908464   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.909696   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.908484   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:03.909743   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:03.909750   17906 addons.go:475] Verifying addon ingress=true in "addons-657805"
	I0729 00:50:03.910641   17906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 00:50:03.910856   17906 out.go:177] * Verifying ingress addon...
	I0729 00:50:03.912521   17906 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 00:50:03.918982   17906 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 00:50:03.919003   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:03.922912   17906 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 00:50:03.922927   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:04.222285   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 00:50:04.415637   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:04.418129   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:04.933552   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:04.934438   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:05.282528   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.349654427s)
	I0729 00:50:05.282544   17906 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.744870221s)
	I0729 00:50:05.282600   17906 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.507342053s)
	I0729 00:50:05.282621   17906 api_server.go:72] duration metric: took 10.665881371s to wait for apiserver process to appear ...
	I0729 00:50:05.282640   17906 api_server.go:88] waiting for apiserver healthz status ...
	I0729 00:50:05.282662   17906 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0729 00:50:05.282585   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:05.282749   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:05.283122   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:05.283143   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:05.283153   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:05.283163   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:05.283180   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:05.283401   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:05.283423   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:05.283434   17906 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-657805"
	I0729 00:50:05.284333   17906 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 00:50:05.285262   17906 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 00:50:05.287109   17906 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 00:50:05.287812   17906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 00:50:05.288457   17906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 00:50:05.288476   17906 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 00:50:05.294716   17906 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0729 00:50:05.302957   17906 api_server.go:141] control plane version: v1.30.3
	I0729 00:50:05.302988   17906 api_server.go:131] duration metric: took 20.339506ms to wait for apiserver health ...
	I0729 00:50:05.302998   17906 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 00:50:05.310100   17906 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 00:50:05.310124   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:05.332140   17906 system_pods.go:59] 19 kube-system pods found
	I0729 00:50:05.332166   17906 system_pods.go:61] "coredns-7db6d8ff4d-sglhh" [3b1ee481-ea1f-4fd0-8b99-531a84047e07] Running
	I0729 00:50:05.332171   17906 system_pods.go:61] "coredns-7db6d8ff4d-t65vz" [ad130721-0b7d-4bfe-ac45-f7f12f0815b5] Running
	I0729 00:50:05.332178   17906 system_pods.go:61] "csi-hostpath-attacher-0" [3ae11817-81ae-4f2a-ab6f-60451af82417] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 00:50:05.332182   17906 system_pods.go:61] "csi-hostpath-resizer-0" [83f41608-3bd5-43db-90b4-3e748933f87f] Pending
	I0729 00:50:05.332187   17906 system_pods.go:61] "csi-hostpathplugin-xcdz6" [8cc92d3f-35c2-4eca-9b3d-065617a32154] Pending
	I0729 00:50:05.332190   17906 system_pods.go:61] "etcd-addons-657805" [e295d075-78a7-46b3-beaa-419b4195a7ae] Running
	I0729 00:50:05.332193   17906 system_pods.go:61] "kube-apiserver-addons-657805" [bdea928e-5e23-4f0c-8bd4-a2027d562a62] Running
	I0729 00:50:05.332196   17906 system_pods.go:61] "kube-controller-manager-addons-657805" [28699945-1451-442f-b75d-55c7de3e3b54] Running
	I0729 00:50:05.332202   17906 system_pods.go:61] "kube-ingress-dns-minikube" [a3d38178-b58f-4c20-aa2c-a333b13ba547] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 00:50:05.332206   17906 system_pods.go:61] "kube-proxy-kvp86" [5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0] Running
	I0729 00:50:05.332209   17906 system_pods.go:61] "kube-scheduler-addons-657805" [04d2e84b-63d7-4b48-a55d-bf912e2acc15] Running
	I0729 00:50:05.332214   17906 system_pods.go:61] "metrics-server-c59844bb4-5pktj" [f3d59e24-fa87-4a81-a526-dd3281cc933f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 00:50:05.332219   17906 system_pods.go:61] "nvidia-device-plugin-daemonset-q9787" [88e23009-4d91-4d63-b0ed-514cd85efcad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 00:50:05.332227   17906 system_pods.go:61] "registry-656c9c8d9c-vvt4p" [c2c15540-cbdd-4d9d-93ee-242fed10a376] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 00:50:05.332234   17906 system_pods.go:61] "registry-proxy-4dnlr" [776b01e7-fab4-4418-bc4f-350a057e9cd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 00:50:05.332240   17906 system_pods.go:61] "snapshot-controller-745499f584-7bgm5" [54414c56-b0fd-4b67-9109-d0caf1d9d941] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.332248   17906 system_pods.go:61] "snapshot-controller-745499f584-qtkvv" [4af9fa15-7f2e-4444-acd5-000dae3daf9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.332252   17906 system_pods.go:61] "storage-provisioner" [52e2a3d2-506b-440e-b1e3-485de0fe81e5] Running
	I0729 00:50:05.332258   17906 system_pods.go:61] "tiller-deploy-6677d64bcd-ctj2p" [19ff6eb3-431f-4705-9f70-09fb802cccd1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 00:50:05.332265   17906 system_pods.go:74] duration metric: took 29.260919ms to wait for pod list to return data ...
	I0729 00:50:05.332274   17906 default_sa.go:34] waiting for default service account to be created ...
	I0729 00:50:05.344177   17906 default_sa.go:45] found service account: "default"
	I0729 00:50:05.344202   17906 default_sa.go:55] duration metric: took 11.92175ms for default service account to be created ...
	I0729 00:50:05.344211   17906 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 00:50:05.365241   17906 system_pods.go:86] 19 kube-system pods found
	I0729 00:50:05.365271   17906 system_pods.go:89] "coredns-7db6d8ff4d-sglhh" [3b1ee481-ea1f-4fd0-8b99-531a84047e07] Running
	I0729 00:50:05.365277   17906 system_pods.go:89] "coredns-7db6d8ff4d-t65vz" [ad130721-0b7d-4bfe-ac45-f7f12f0815b5] Running
	I0729 00:50:05.365284   17906 system_pods.go:89] "csi-hostpath-attacher-0" [3ae11817-81ae-4f2a-ab6f-60451af82417] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 00:50:05.365289   17906 system_pods.go:89] "csi-hostpath-resizer-0" [83f41608-3bd5-43db-90b4-3e748933f87f] Pending
	I0729 00:50:05.365295   17906 system_pods.go:89] "csi-hostpathplugin-xcdz6" [8cc92d3f-35c2-4eca-9b3d-065617a32154] Pending
	I0729 00:50:05.365299   17906 system_pods.go:89] "etcd-addons-657805" [e295d075-78a7-46b3-beaa-419b4195a7ae] Running
	I0729 00:50:05.365303   17906 system_pods.go:89] "kube-apiserver-addons-657805" [bdea928e-5e23-4f0c-8bd4-a2027d562a62] Running
	I0729 00:50:05.365308   17906 system_pods.go:89] "kube-controller-manager-addons-657805" [28699945-1451-442f-b75d-55c7de3e3b54] Running
	I0729 00:50:05.365315   17906 system_pods.go:89] "kube-ingress-dns-minikube" [a3d38178-b58f-4c20-aa2c-a333b13ba547] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0729 00:50:05.365320   17906 system_pods.go:89] "kube-proxy-kvp86" [5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0] Running
	I0729 00:50:05.365325   17906 system_pods.go:89] "kube-scheduler-addons-657805" [04d2e84b-63d7-4b48-a55d-bf912e2acc15] Running
	I0729 00:50:05.365330   17906 system_pods.go:89] "metrics-server-c59844bb4-5pktj" [f3d59e24-fa87-4a81-a526-dd3281cc933f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 00:50:05.365341   17906 system_pods.go:89] "nvidia-device-plugin-daemonset-q9787" [88e23009-4d91-4d63-b0ed-514cd85efcad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0729 00:50:05.365365   17906 system_pods.go:89] "registry-656c9c8d9c-vvt4p" [c2c15540-cbdd-4d9d-93ee-242fed10a376] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 00:50:05.365376   17906 system_pods.go:89] "registry-proxy-4dnlr" [776b01e7-fab4-4418-bc4f-350a057e9cd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 00:50:05.365383   17906 system_pods.go:89] "snapshot-controller-745499f584-7bgm5" [54414c56-b0fd-4b67-9109-d0caf1d9d941] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.365389   17906 system_pods.go:89] "snapshot-controller-745499f584-qtkvv" [4af9fa15-7f2e-4444-acd5-000dae3daf9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 00:50:05.365395   17906 system_pods.go:89] "storage-provisioner" [52e2a3d2-506b-440e-b1e3-485de0fe81e5] Running
	I0729 00:50:05.365402   17906 system_pods.go:89] "tiller-deploy-6677d64bcd-ctj2p" [19ff6eb3-431f-4705-9f70-09fb802cccd1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0729 00:50:05.365410   17906 system_pods.go:126] duration metric: took 21.193907ms to wait for k8s-apps to be running ...
	I0729 00:50:05.365419   17906 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 00:50:05.365460   17906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 00:50:05.414157   17906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 00:50:05.414181   17906 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 00:50:05.426390   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:05.431331   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:05.498476   17906 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 00:50:05.498498   17906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 00:50:05.654636   17906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 00:50:05.793389   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:05.921994   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:05.925658   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:06.022554   17906 system_svc.go:56] duration metric: took 657.12426ms WaitForService to wait for kubelet
	I0729 00:50:06.022582   17906 kubeadm.go:582] duration metric: took 11.405844626s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 00:50:06.022600   17906 node_conditions.go:102] verifying NodePressure condition ...
	I0729 00:50:06.022715   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.800385788s)
	I0729 00:50:06.022761   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:06.022778   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:06.023053   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:06.023137   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:06.023151   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:06.023160   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:06.023165   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:06.023454   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:06.023469   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:06.023455   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:06.025878   17906 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 00:50:06.025898   17906 node_conditions.go:123] node cpu capacity is 2
	I0729 00:50:06.025908   17906 node_conditions.go:105] duration metric: took 3.30242ms to run NodePressure ...
	I0729 00:50:06.025918   17906 start.go:241] waiting for startup goroutines ...
	I0729 00:50:06.293601   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:06.416586   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:06.422765   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:06.793842   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:06.917666   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:06.929141   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:07.310214   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:07.447937   17906 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.793264102s)
	I0729 00:50:07.447985   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:07.447995   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:07.448280   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:07.448301   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:07.448317   17906 main.go:141] libmachine: Making call to close driver server
	I0729 00:50:07.448326   17906 main.go:141] libmachine: (addons-657805) Calling .Close
	I0729 00:50:07.448591   17906 main.go:141] libmachine: Successfully made call to close driver server
	I0729 00:50:07.448631   17906 main.go:141] libmachine: (addons-657805) DBG | Closing plugin on server side
	I0729 00:50:07.448649   17906 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 00:50:07.450324   17906 addons.go:475] Verifying addon gcp-auth=true in "addons-657805"
	I0729 00:50:07.452164   17906 out.go:177] * Verifying gcp-auth addon...
	I0729 00:50:07.454823   17906 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 00:50:07.455167   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:07.455290   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:07.463873   17906 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 00:50:07.463892   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:07.793624   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:07.915107   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:07.917728   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:07.960039   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:08.292831   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:08.415231   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:08.417478   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:08.458109   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:08.793178   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:08.916001   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:08.916234   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:08.958929   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:09.294729   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:09.415465   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:09.416985   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:09.458654   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:09.794587   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:09.915367   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:09.918264   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:09.959070   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:10.293393   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:10.415968   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:10.418556   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:10.459356   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:10.794129   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:10.915321   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:10.917847   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:10.958953   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:11.300040   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:11.415937   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:11.416270   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:11.459562   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:11.800657   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:11.914921   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:11.917297   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:11.959574   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:12.294109   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:12.416589   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:12.418876   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:12.459575   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:12.793518   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:12.916046   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:12.916170   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:12.958847   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:13.293497   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:13.415135   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:13.417448   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:13.458942   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:13.793113   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:13.915240   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:13.917375   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:13.959296   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:14.293234   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:14.417093   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:14.417800   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:14.461014   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:14.793957   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:14.916945   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:14.917474   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:14.958160   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:15.295795   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:15.415615   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:15.417663   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:15.458726   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:15.794791   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:15.914927   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:15.917658   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:15.958686   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:16.294618   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:16.416010   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:16.418278   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:16.461130   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:16.794291   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:16.915839   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:16.917011   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:16.958609   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:17.294108   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:17.415621   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:17.418682   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:17.458713   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:17.794111   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:17.915813   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:17.918256   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:17.959042   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:18.293436   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:18.416250   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:18.419149   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:18.458300   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:18.794227   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:19.164722   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:19.165055   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:19.171163   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:19.293562   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:19.418728   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:19.418904   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:19.458542   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:19.794246   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:19.916056   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:19.918930   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:19.958451   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:20.293317   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:20.416685   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:20.416723   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:20.459317   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:20.793622   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:20.916769   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:20.917213   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:20.958068   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:21.389664   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:21.415553   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:21.605464   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:21.607940   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:21.793475   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:21.917011   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:21.918764   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:21.958181   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:22.293434   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:22.416380   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:22.416996   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:22.458759   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:22.795087   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:22.917084   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:22.917538   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:22.959903   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:23.293798   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:23.416124   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:23.417160   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:23.459509   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:23.795149   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:23.917065   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:23.917182   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:23.959183   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:24.293262   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:24.416084   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:24.420910   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:24.462665   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:24.792992   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:24.916880   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:24.924929   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:24.959078   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:25.294236   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:25.420902   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:25.421035   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:25.462446   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:25.793552   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:25.916724   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:25.919289   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:25.959341   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:26.293642   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:26.415660   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:26.417557   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:26.458763   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:26.794029   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:26.920985   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:26.921307   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:26.959003   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:27.293478   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:27.418196   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:27.428104   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:27.459191   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:27.795671   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:27.915489   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:27.916747   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:27.958825   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:28.293589   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:28.415504   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:28.416988   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:28.458858   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:28.792919   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:28.918390   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:28.918526   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:28.958222   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:29.293578   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:29.416206   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:29.417396   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:29.458847   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:29.795200   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:29.916876   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:29.916971   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:29.958999   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:30.293375   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:30.416241   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:30.417575   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:30.460086   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:30.794296   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:30.915960   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:30.916394   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:30.958956   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:31.293266   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:31.415865   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:31.417415   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:31.458369   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:31.793919   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:31.916660   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:31.919139   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:31.958484   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:32.293775   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:32.415211   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:32.416945   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:32.458924   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:32.793860   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:32.916683   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:32.918648   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:32.958935   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:33.293861   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:33.417304   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:33.418381   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:33.465257   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:33.794074   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:33.915644   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:33.918283   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:33.959077   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:34.296798   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:34.416340   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:34.419809   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:34.458745   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:34.793459   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:34.915002   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:34.916330   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:34.958562   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:35.294317   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:35.415501   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:35.417008   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:35.458614   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:35.794384   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:35.916392   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:35.924541   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:35.957844   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:36.293170   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:36.416604   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:36.417782   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:36.458463   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:36.793450   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:36.916337   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:36.918492   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:36.957947   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:37.293724   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:37.415291   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:37.418054   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:37.459046   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:37.803053   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:37.915500   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:37.917916   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:37.958302   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:38.293994   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:38.416247   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:38.417789   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:38.458688   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:38.795144   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:38.916731   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:38.917671   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:38.958612   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:39.294623   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:39.417705   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:39.420751   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:39.458555   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:39.793844   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:39.916525   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:39.918126   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:39.958362   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:40.311690   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:40.417256   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:40.420424   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:40.459721   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:40.795394   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:40.917614   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:40.919326   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:40.959015   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:41.294555   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:41.415712   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 00:50:41.416589   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:41.458416   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:41.794711   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:41.919512   17906 kapi.go:107] duration metric: took 38.008865162s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 00:50:41.921418   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:41.958150   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:42.293731   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:42.417551   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:42.460586   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:42.813599   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:42.918442   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:42.957997   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:43.293429   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:43.417485   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:43.458940   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:43.792996   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:43.917694   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:43.958772   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:44.293552   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:44.417081   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:44.458612   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:44.793858   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:44.917862   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:44.959330   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:45.294287   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:45.417648   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:45.459006   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:45.796156   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:45.917416   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:45.960023   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:46.294992   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:46.418226   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:46.459597   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:46.795845   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:46.916916   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:46.958855   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:47.295236   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:47.417167   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:47.458611   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:47.967630   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:47.967919   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:47.969948   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:48.294251   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:48.416434   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:48.459155   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:48.794070   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:48.917450   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:48.959205   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:49.296441   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:49.416862   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:49.458740   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:49.793716   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:49.916832   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:49.958657   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:50.294215   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:50.417426   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:50.458940   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:50.794139   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:50.917446   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:50.958240   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:51.295765   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:51.417116   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:51.459661   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:51.798611   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:51.916774   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:51.959051   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:52.293663   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:52.416862   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:52.458230   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:52.793727   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:52.917266   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:52.958676   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:53.293738   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:53.418428   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:53.458865   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:54.090696   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:54.091307   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:54.091692   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:54.293911   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:54.416610   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:54.458318   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:54.801567   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:54.918057   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:54.958830   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:55.294624   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:55.417036   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:55.458698   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:55.794444   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:55.917857   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:55.957973   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:56.293175   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:56.417919   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:56.457952   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:56.792902   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:56.917274   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:56.959315   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:57.293635   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:57.416881   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:57.458743   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:57.795626   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:57.917082   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:57.958546   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:58.293663   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:58.416844   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:58.458328   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:58.793336   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:58.916763   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:58.957845   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:59.294576   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:59.419024   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:59.460116   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:50:59.793125   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:50:59.918067   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:50:59.959162   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:00.293544   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:00.417057   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:00.458755   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:00.794333   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:00.917099   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:00.959092   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:01.293812   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:01.417283   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:01.458552   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:01.793728   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:01.930231   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:01.959014   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:02.299911   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:02.416988   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:02.458064   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:02.794059   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:02.917891   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:02.959016   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:03.293712   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:03.418613   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:03.458661   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:03.794551   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:03.917325   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:03.959128   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:04.294619   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:04.418199   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:04.458448   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:04.800017   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:04.917067   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:04.960313   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:05.293286   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:05.417749   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:05.458251   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:05.793386   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:05.916777   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:05.958942   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:06.295168   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:06.417038   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:06.458770   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:06.794453   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:06.917259   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:06.959295   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:07.294293   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:07.417836   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:07.458733   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:07.805483   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:07.927910   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:07.959337   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:08.294340   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:08.417346   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:08.459422   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:08.793856   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:08.916594   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:08.972481   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:09.294970   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:09.417678   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:09.458336   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:09.793489   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:09.917613   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:09.958195   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:10.293407   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:10.417413   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:10.458542   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:10.794041   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:10.917108   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:10.958597   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:11.295409   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:11.416559   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:11.458522   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:11.794666   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:11.916970   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:11.958754   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:12.299614   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:12.416955   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:12.458964   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:12.794012   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:12.916925   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:12.958168   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:13.294832   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:13.417866   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:13.458888   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:13.898708   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:13.918625   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:13.958107   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:14.294154   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:14.416977   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:14.458640   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:14.794358   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:14.918356   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:14.960056   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:15.293598   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:15.438874   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:15.460177   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:15.793459   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:15.919345   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:15.959414   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:16.294615   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:16.417044   17906 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 00:51:16.458298   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:16.794143   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:16.917157   17906 kapi.go:107] duration metric: took 1m13.004632603s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 00:51:16.959659   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:17.294353   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:17.459126   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:17.793568   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:17.958100   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:18.293319   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:18.459162   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:18.793370   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:18.959346   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:19.294109   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:19.458944   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:19.793988   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:19.958548   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:20.295678   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:20.458527   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:20.794127   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:20.960104   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:21.293907   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:21.458666   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:21.940387   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:21.958287   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:22.293457   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:22.462578   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 00:51:22.796470   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:22.970039   17906 kapi.go:107] duration metric: took 1m15.515214827s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 00:51:22.971993   17906 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-657805 cluster.
	I0729 00:51:22.973370   17906 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 00:51:22.974619   17906 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
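	(Editor's note, not part of the captured log: the `gcp-auth-skip-secret` opt-out mentioned in the three messages above is applied as a pod label. A minimal sketch follows; the pod name and container are hypothetical examples, and it assumes the addon honors the label exactly as the message describes.)

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                  # hypothetical example pod
	      labels:
	        gcp-auth-skip-secret: "true"      # asks the gcp-auth addon not to mount GCP credentials into this pod
	    spec:
	      containers:
	      - name: busybox
	        image: gcr.io/k8s-minikube/busybox
	        command: ["sleep", "3600"]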
	I0729 00:51:23.293497   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:23.797195   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:24.294180   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:24.796404   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:25.295601   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:25.792253   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:26.303615   17906 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 00:51:26.793248   17906 kapi.go:107] duration metric: took 1m21.50543284s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 00:51:26.795037   17906 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0729 00:51:26.796349   17906 addons.go:510] duration metric: took 1m32.179554034s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin default-storageclass storage-provisioner-rancher cloud-spanner metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0729 00:51:26.796384   17906 start.go:246] waiting for cluster config update ...
	I0729 00:51:26.796400   17906 start.go:255] writing updated cluster config ...
	I0729 00:51:26.796623   17906 ssh_runner.go:195] Run: rm -f paused
	I0729 00:51:26.851922   17906 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 00:51:26.853550   17906 out.go:177] * Done! kubectl is now configured to use "addons-657805" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 00:56:27 addons-657805 crio[683]: time="2024-07-29 00:56:27.977356641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28351863-9408-4176-94f5-2813bd09ea20 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:56:27 addons-657805 crio[683]: time="2024-07-29 00:56:27.978649872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5153c87-fc60-4c5e-8cf0-08dce4de6534 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:56:27 addons-657805 crio[683]: time="2024-07-29 00:56:27.980195781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214587980171369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5153c87-fc60-4c5e-8cf0-08dce4de6534 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:56:27 addons-657805 crio[683]: time="2024-07-29 00:56:27.980848194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=551bcfc7-e169-44c9-ac5c-bee4ae6b1b7d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:27 addons-657805 crio[683]: time="2024-07-29 00:56:27.980921590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=551bcfc7-e169-44c9-ac5c-bee4ae6b1b7d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:27 addons-657805 crio[683]: time="2024-07-29 00:56:27.981186070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17222141758606
49173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=551bcfc7-e169-44c9-ac5c-bee4ae6b1b7d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.011801894Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=095f582f-637c-466a-b389-bc3bcc406072 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.012106457Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-srwb4,Uid:3f8c5130-5429-4ba4-b0bc-d64604463eea,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214460632561916,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T00:54:20.322025833Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&PodSandboxMetadata{Name:nginx,Uid:29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1722214317564428763,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T00:51:57.256847790Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c2321700-98ca-4fb6-82af-408086cad702,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214287469444010,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-82af-408086cad702,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T00:51:27.151556606Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8c9063fe62de5995ec
787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-5pktj,Uid:f3d59e24-fa87-4a81-a526-dd3281cc933f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214200619710688,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T00:49:59.976804751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:52e2a3d2-506b-440e-b1e3-485de0fe81e5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214199627866919,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T00:49:59.301096724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sglhh,Uid:3b1ee481-ea1f-4fd0-8b99-531a84047e07,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214195310778206,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T00:49:54.994247670Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&PodSandboxMetadata{Name:kube-proxy-kvp86,Uid:5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214195146824697,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubern
etes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T00:49:54.831636517Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-657805,Uid:b6d4aab67e7f7f6474899aba0076081c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214175659855842,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b6d4aab67e7f7f6474899aba0076081c,kubernetes.io/config.seen: 2024-07-29T00:49:35.164874931Z,kubernetes.io/config.source: fil
e,},RuntimeHandler:,},&PodSandbox{Id:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-657805,Uid:362308c5f6f5ccd5eeb5a4c232a54105,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214175646432574,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.18:8443,kubernetes.io/config.hash: 362308c5f6f5ccd5eeb5a4c232a54105,kubernetes.io/config.seen: 2024-07-29T00:49:35.164873021Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-657805,Uid:58a344d0d12b0547d54b3f03ac2afd2e,Namesp
ace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214175623802337,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 58a344d0d12b0547d54b3f03ac2afd2e,kubernetes.io/config.seen: 2024-07-29T00:49:35.164874136Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&PodSandboxMetadata{Name:etcd-addons-657805,Uid:7f5ea8da4ad09fe4ac784bd8378d5702,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722214175621797477,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,
tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.18:2379,kubernetes.io/config.hash: 7f5ea8da4ad09fe4ac784bd8378d5702,kubernetes.io/config.seen: 2024-07-29T00:49:35.164868923Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=095f582f-637c-466a-b389-bc3bcc406072 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.013031619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c754c4b3-34ec-4eac-bfe2-6de44383dd97 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.013102464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c754c4b3-34ec-4eac-bfe2-6de44383dd97 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.013385690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17222141758606
49173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c754c4b3-34ec-4eac-bfe2-6de44383dd97 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.022444876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b2312fe-1b9e-4591-a8c0-46c40b1f6f8f name=/runtime.v1.RuntimeService/Version
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.022515195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b2312fe-1b9e-4591-a8c0-46c40b1f6f8f name=/runtime.v1.RuntimeService/Version
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.024214098Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25754981-4eee-465f-a02d-3a7ad66fe224 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.025706721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214588025670148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25754981-4eee-465f-a02d-3a7ad66fe224 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.026234403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33d91e74-f22d-418e-b491-c3aaac05bae3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.026306230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33d91e74-f22d-418e-b491-c3aaac05bae3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.026676966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17222141758606
49173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33d91e74-f22d-418e-b491-c3aaac05bae3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.059749680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af01fae7-e876-4c3e-8528-fe202fdaa136 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.059832633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af01fae7-e876-4c3e-8528-fe202fdaa136 name=/runtime.v1.RuntimeService/Version
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.061013927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea93e325-6845-4b23-83d7-2b29424ce5e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.062532246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722214588062506831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea93e325-6845-4b23-83d7-2b29424ce5e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.063059668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=733a6f31-e35c-4533-9e48-28eadc5fb020 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.063130712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=733a6f31-e35c-4533-9e48-28eadc5fb020 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 00:56:28 addons-657805 crio[683]: time="2024-07-29 00:56:28.063565337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8da8b71c903c723f54ed22dd69ce83e634302237cfae0bc7c48c99b938a1a4ed,PodSandboxId:bc6ffcaf0af239d8d14f78e351a39414fd51893409b017d1671c33e37bd2e7a2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722214463368257518,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-srwb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f8c5130-5429-4ba4-b0bc-d64604463eea,},Annotations:map[string]string{io.kubernetes.container.hash: 507698f7,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f979ed0d59ceb0a3fe77e8a588fbdc216b780f146296362aee81474baf8b7b,PodSandboxId:21a7602a7970038680b9100c741077a59c15c08952ac1aa1e531ec0f3b591df4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722214323785218341,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 29ccff4b-e3a4-41d4-bd1e-f88c2e6fb79c,},Annotations:map[string]string{io.kubernet
es.container.hash: 565a93f8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da001d6bccbdef17d498eac5a7a0a1ba32eb0f73114e28c343fe3978772f304e,PodSandboxId:6ef848877b3decc1e1a0be43f7bd078e9cf623b71b4fb933ba98bfa7398e213e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722214290834977914,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c2321700-98ca-4fb6-8
2af-408086cad702,},Annotations:map[string]string{io.kubernetes.container.hash: 45ecdcff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8c616404fea7e5342f50b9e6045edaa77cc2c28a38474865a7ed3c3f794138,PodSandboxId:8c9063fe62de5995ec787434cc30364ade60d57ed29e2a0ca9197a8ad5b33425,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722214234127740443,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5pktj,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f3d59e24-fa87-4a81-a526-dd3281cc933f,},Annotations:map[string]string{io.kubernetes.container.hash: cece9cea,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7,PodSandboxId:e10f3dcd91eed412142efed9b886a39ea8fd253ee07d5f98f59adab09704ec6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722214200906437831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52e2a3d2-506b-440e-b1e3-485de0fe81e5,},Annotations:map[string]string{io.kubernetes.container.hash: 89fbe4b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b,PodSandboxId:0b021da77f062e78b72303aa3f380b3eedfde35a79146bc2e572d4a0dd4f7363,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722214198348601832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-sglhh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b1ee481-ea1f-4fd0-8b99-531a84047e07,},Annotations:map[string]string{io.kubernetes.container.hash: 84f9508e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245,PodSandboxId:750fec69cfa8773120254d9d275b455e4b3a8f7e7f8a6defcd9ac68dc92385ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722214195736266949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvp86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3ed19f-0d2a-46bd-89eb-d31fa88a3ea0,},Annotations:map[string]string{io.kubernetes.container.hash: dd20b3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f,PodSandboxId:3f36a2408aceaabeb96ec3f00d2d37bd40ce1860d24c57725e757501a1f5fbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722214175872105607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6d4aab67e7f7f6474899aba0076081c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67,PodSandboxId:9987febd0df8ec4413f510c772d1a2221aef49aee25ae509fb39843517ed1f50,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722214175855725684,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5ea8da4ad09fe4ac784bd8378d5702,},Annotations:map[string]string{io.kubernetes.container.hash: 2326aee3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde,PodSandboxId:63690cadeb68bf946669f7f55646382345e48e4c54426ea3792ac417895d159a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17222141758606
49173,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362308c5f6f5ccd5eeb5a4c232a54105,},Annotations:map[string]string{io.kubernetes.container.hash: fe954fd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206,PodSandboxId:0ea5272a1942103b034e147d6925aa1c232e9aa45c7390e06599c5d1c4fb4a2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722214175785411192,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-657805,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58a344d0d12b0547d54b3f03ac2afd2e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=733a6f31-e35c-4533-9e48-28eadc5fb020 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8da8b71c903c7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   bc6ffcaf0af23       hello-world-app-6778b5fc9f-srwb4
	55f979ed0d59c       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   21a7602a79700       nginx
	da001d6bccbde       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     4 minutes ago       Running             busybox                   0                   6ef848877b3de       busybox
	fd8c616404fea       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Running             metrics-server            0                   8c9063fe62de5       metrics-server-c59844bb4-5pktj
	1e58106e1d280       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       0                   e10f3dcd91eed       storage-provisioner
	da92967e70eba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        6 minutes ago       Running             coredns                   0                   0b021da77f062       coredns-7db6d8ff4d-sglhh
	ef538b61c48a8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        6 minutes ago       Running             kube-proxy                0                   750fec69cfa87       kube-proxy-kvp86
	ebe4fbe2afb49       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        6 minutes ago       Running             kube-scheduler            0                   3f36a2408acea       kube-scheduler-addons-657805
	56ba32aabad2a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        6 minutes ago       Running             kube-apiserver            0                   63690cadeb68b       kube-apiserver-addons-657805
	219ee84cb5479       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        6 minutes ago       Running             etcd                      0                   9987febd0df8e       etcd-addons-657805
	cb14a7eb5f100       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        6 minutes ago       Running             kube-controller-manager   0                   0ea5272a19421       kube-controller-manager-addons-657805
	
	
	==> coredns [da92967e70eba7d7e1043f0dc9ca2c2df5b5218f209a76240b62e3d6fac7526b] <==
	[INFO] 10.244.0.7:55876 - 59019 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105277s
	[INFO] 10.244.0.7:58036 - 56772 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126617s
	[INFO] 10.244.0.7:58036 - 1986 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085045s
	[INFO] 10.244.0.7:51168 - 54477 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000139345s
	[INFO] 10.244.0.7:51168 - 52687 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079759s
	[INFO] 10.244.0.7:35664 - 19161 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090404s
	[INFO] 10.244.0.7:35664 - 57048 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091894s
	[INFO] 10.244.0.7:35265 - 64461 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142386s
	[INFO] 10.244.0.7:35265 - 59336 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074863s
	[INFO] 10.244.0.7:57121 - 61683 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068608s
	[INFO] 10.244.0.7:57121 - 14833 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080524s
	[INFO] 10.244.0.7:54405 - 19529 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005349s
	[INFO] 10.244.0.7:54405 - 39240 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00024391s
	[INFO] 10.244.0.7:59019 - 63328 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056187s
	[INFO] 10.244.0.7:59019 - 22558 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110906s
	[INFO] 10.244.0.22:45945 - 22370 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000486413s
	[INFO] 10.244.0.22:33955 - 7214 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000074675s
	[INFO] 10.244.0.22:36185 - 63378 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000269409s
	[INFO] 10.244.0.22:35608 - 62867 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155662s
	[INFO] 10.244.0.22:57123 - 37617 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109529s
	[INFO] 10.244.0.22:58981 - 21344 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127524s
	[INFO] 10.244.0.22:40430 - 12022 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001347922s
	[INFO] 10.244.0.22:47662 - 23434 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001088439s
	[INFO] 10.244.0.24:45684 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000273223s
	[INFO] 10.244.0.24:33956 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183503s
	
	
	==> describe nodes <==
	Name:               addons-657805
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-657805
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=addons-657805
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T00_49_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-657805
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 00:49:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-657805
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 00:56:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 00:54:47 +0000   Mon, 29 Jul 2024 00:49:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 00:54:47 +0000   Mon, 29 Jul 2024 00:49:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 00:54:47 +0000   Mon, 29 Jul 2024 00:49:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 00:54:47 +0000   Mon, 29 Jul 2024 00:49:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    addons-657805
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e47ebefea744fb299de58a1d88e126a
	  System UUID:                1e47ebef-ea74-4fb2-99de-58a1d88e126a
	  Boot ID:                    b952f0ff-9332-441c-81d7-1e7f5d3c3cc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     hello-world-app-6778b5fc9f-srwb4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-7db6d8ff4d-sglhh                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m34s
	  kube-system                 etcd-addons-657805                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m47s
	  kube-system                 kube-apiserver-addons-657805             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-controller-manager-addons-657805    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-kvp86                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-scheduler-addons-657805             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 metrics-server-c59844bb4-5pktj           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m53s (x8 over 6m53s)  kubelet          Node addons-657805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m53s (x8 over 6m53s)  kubelet          Node addons-657805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m53s (x7 over 6m53s)  kubelet          Node addons-657805 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m47s                  kubelet          Node addons-657805 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s                  kubelet          Node addons-657805 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s                  kubelet          Node addons-657805 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m46s                  kubelet          Node addons-657805 status is now: NodeReady
	  Normal  RegisteredNode           6m35s                  node-controller  Node addons-657805 event: Registered Node addons-657805 in Controller
	
	
	==> dmesg <==
	[  +0.155269] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.021462] kauditd_printk_skb: 104 callbacks suppressed
	[Jul29 00:50] kauditd_printk_skb: 117 callbacks suppressed
	[  +6.716185] kauditd_printk_skb: 103 callbacks suppressed
	[ +22.326278] kauditd_printk_skb: 4 callbacks suppressed
	[ +20.601294] kauditd_printk_skb: 27 callbacks suppressed
	[Jul29 00:51] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.026049] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.273740] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.816421] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.060110] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.005666] kauditd_printk_skb: 50 callbacks suppressed
	[ +23.974926] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.348244] kauditd_printk_skb: 4 callbacks suppressed
	[Jul29 00:52] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.398639] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.061088] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.129848] kauditd_printk_skb: 35 callbacks suppressed
	[Jul29 00:53] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.506236] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.043604] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.012015] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.474495] kauditd_printk_skb: 16 callbacks suppressed
	[Jul29 00:54] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.038552] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [219ee84cb547903968b7a45cad9827f7a37d3dbbbcb50dfd16d456392c1aea67] <==
	{"level":"warn","ts":"2024-07-29T00:50:47.957836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.223473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85382"}
	{"level":"info","ts":"2024-07-29T00:50:47.957879Z","caller":"traceutil/trace.go:171","msg":"trace[223413755] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:989; }","duration":"173.291667ms","start":"2024-07-29T00:50:47.784579Z","end":"2024-07-29T00:50:47.957871Z","steps":["trace[223413755] 'range keys from in-memory index tree'  (duration: 173.059669ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:50:54.080162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.255432ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14071"}
	{"level":"info","ts":"2024-07-29T00:50:54.080274Z","caller":"traceutil/trace.go:171","msg":"trace[1809084110] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1008; }","duration":"171.445393ms","start":"2024-07-29T00:50:53.908805Z","end":"2024-07-29T00:50:54.080251Z","steps":["trace[1809084110] 'range keys from in-memory index tree'  (duration: 171.141827ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:50:54.08038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.095073ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11161"}
	{"level":"info","ts":"2024-07-29T00:50:54.080406Z","caller":"traceutil/trace.go:171","msg":"trace[895416400] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1008; }","duration":"129.139702ms","start":"2024-07-29T00:50:53.951259Z","end":"2024-07-29T00:50:54.080398Z","steps":["trace[895416400] 'range keys from in-memory index tree'  (duration: 128.921314ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:50:54.080222Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.02907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85439"}
	{"level":"info","ts":"2024-07-29T00:50:54.080452Z","caller":"traceutil/trace.go:171","msg":"trace[1483522574] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1008; }","duration":"295.286229ms","start":"2024-07-29T00:50:53.78516Z","end":"2024-07-29T00:50:54.080446Z","steps":["trace[1483522574] 'range keys from in-memory index tree'  (duration: 294.843109ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:06.677243Z","caller":"traceutil/trace.go:171","msg":"trace[1170433266] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"211.742091ms","start":"2024-07-29T00:51:06.465482Z","end":"2024-07-29T00:51:06.677225Z","steps":["trace[1170433266] 'process raft request'  (duration: 211.574154ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:13.884671Z","caller":"traceutil/trace.go:171","msg":"trace[648145901] linearizableReadLoop","detail":"{readStateIndex:1172; appliedIndex:1171; }","duration":"102.102891ms","start":"2024-07-29T00:51:13.782553Z","end":"2024-07-29T00:51:13.884656Z","steps":["trace[648145901] 'read index received'  (duration: 101.951568ms)","trace[648145901] 'applied index is now lower than readState.Index'  (duration: 150.647µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T00:51:13.884887Z","caller":"traceutil/trace.go:171","msg":"trace[1463469719] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"183.976845ms","start":"2024-07-29T00:51:13.700896Z","end":"2024-07-29T00:51:13.884872Z","steps":["trace[1463469719] 'process raft request'  (duration: 183.652776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:51:13.885036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.463151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85581"}
	{"level":"info","ts":"2024-07-29T00:51:13.885103Z","caller":"traceutil/trace.go:171","msg":"trace[1965017160] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1138; }","duration":"102.565024ms","start":"2024-07-29T00:51:13.782529Z","end":"2024-07-29T00:51:13.885094Z","steps":["trace[1965017160] 'agreement among raft nodes before linearized reading'  (duration: 102.280834ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:21.923558Z","caller":"traceutil/trace.go:171","msg":"trace[985096048] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"151.443065ms","start":"2024-07-29T00:51:21.772093Z","end":"2024-07-29T00:51:21.923536Z","steps":["trace[985096048] 'process raft request'  (duration: 150.646727ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:21.924458Z","caller":"traceutil/trace.go:171","msg":"trace[621982709] linearizableReadLoop","detail":"{readStateIndex:1205; appliedIndex:1204; }","duration":"142.542934ms","start":"2024-07-29T00:51:21.781902Z","end":"2024-07-29T00:51:21.924445Z","steps":["trace[621982709] 'read index received'  (duration: 140.14895ms)","trace[621982709] 'applied index is now lower than readState.Index'  (duration: 2.391752ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T00:51:21.924753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.836174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85581"}
	{"level":"info","ts":"2024-07-29T00:51:21.925367Z","caller":"traceutil/trace.go:171","msg":"trace[408582026] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1169; }","duration":"143.47424ms","start":"2024-07-29T00:51:21.781881Z","end":"2024-07-29T00:51:21.925355Z","steps":["trace[408582026] 'agreement among raft nodes before linearized reading'  (duration: 142.689863ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:51:26.289861Z","caller":"traceutil/trace.go:171","msg":"trace[795199920] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"337.382057ms","start":"2024-07-29T00:51:25.952459Z","end":"2024-07-29T00:51:26.289841Z","steps":["trace[795199920] 'process raft request'  (duration: 336.811052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T00:51:26.290205Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T00:51:25.952444Z","time spent":"337.642819ms","remote":"127.0.0.1:39426","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1192 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T00:53:00.348223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.85875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/nvidia-device-plugin-daemonset-q9787.17e688b00a6169eb\" ","response":"range_response_count:1 size:859"}
	{"level":"info","ts":"2024-07-29T00:53:00.348418Z","caller":"traceutil/trace.go:171","msg":"trace[774637368] range","detail":"{range_begin:/registry/events/kube-system/nvidia-device-plugin-daemonset-q9787.17e688b00a6169eb; range_end:; response_count:1; response_revision:1675; }","duration":"170.150272ms","start":"2024-07-29T00:53:00.178245Z","end":"2024-07-29T00:53:00.348395Z","steps":["trace[774637368] 'agreement among raft nodes before linearized reading'  (duration: 169.806733ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:53:00.348901Z","caller":"traceutil/trace.go:171","msg":"trace[1667996954] linearizableReadLoop","detail":"{readStateIndex:1741; appliedIndex:1740; }","duration":"169.707876ms","start":"2024-07-29T00:53:00.178276Z","end":"2024-07-29T00:53:00.347984Z","steps":["trace[1667996954] 'read index received'  (duration: 169.194164ms)","trace[1667996954] 'applied index is now lower than readState.Index'  (duration: 512.613µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T00:53:22.608961Z","caller":"traceutil/trace.go:171","msg":"trace[1554804269] transaction","detail":"{read_only:false; response_revision:1889; number_of_response:1; }","duration":"298.775975ms","start":"2024-07-29T00:53:22.310104Z","end":"2024-07-29T00:53:22.60888Z","steps":["trace[1554804269] 'process raft request'  (duration: 298.531695ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:53:53.263631Z","caller":"traceutil/trace.go:171","msg":"trace[1140891849] transaction","detail":"{read_only:false; response_revision:1985; number_of_response:1; }","duration":"161.993518ms","start":"2024-07-29T00:53:53.101621Z","end":"2024-07-29T00:53:53.263614Z","steps":["trace[1140891849] 'process raft request'  (duration: 161.654901ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T00:53:59.407457Z","caller":"traceutil/trace.go:171","msg":"trace[1269141747] transaction","detail":"{read_only:false; response_revision:1991; number_of_response:1; }","duration":"119.68115ms","start":"2024-07-29T00:53:59.28776Z","end":"2024-07-29T00:53:59.407441Z","steps":["trace[1269141747] 'process raft request'  (duration: 119.362093ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:56:28 up 7 min,  0 users,  load average: 0.15, 0.76, 0.51
	Linux addons-657805 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [56ba32aabad2a7c3ecaeadb54b5c6c29a332c87a5d9a00cc327d2a74154f1dde] <==
	E0729 00:51:40.644899       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.89.44:443: connect: connection refused
	E0729 00:51:40.649947       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.89.44:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.89.44:443: connect: connection refused
	I0729 00:51:40.761717       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0729 00:51:51.456750       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0729 00:51:52.481996       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 00:51:57.101858       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 00:51:57.297628       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.59.176"}
	E0729 00:52:31.022868       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 00:52:32.261614       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 00:53:09.386173       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.386211       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.411553       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.411653       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.442944       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.443731       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.464007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.464107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 00:53:09.489278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 00:53:09.489627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0729 00:53:10.443912       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 00:53:10.489776       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 00:53:10.507708       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0729 00:53:16.746637       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.11.83"}
	I0729 00:54:20.465525       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.142.52"}
	E0729 00:54:22.280393       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [cb14a7eb5f1001bbd63d7b21d415b3af85d03ab4f5282396b474724e9d69b206] <==
	E0729 00:54:29.175709       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:54:31.345949       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:54:31.346154       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:54:31.672016       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:54:31.672069       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 00:54:32.217827       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0729 00:55:05.725456       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:55:05.725633       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:55:07.402386       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:55:07.402443       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:55:19.902575       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:55:19.902834       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:55:31.264939       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:55:31.264978       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:55:42.267399       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:55:42.267594       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:56:00.169758       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:56:00.169855       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:56:07.156660       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:56:07.156747       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:56:14.097621       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:56:14.097672       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 00:56:16.050233       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 00:56:16.050486       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 00:56:27.049651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="13.154µs"
	
	
	==> kube-proxy [ef538b61c48a86bae81638d1752ecce8c820d316d178e6e52f8faf3a7d15e245] <==
	I0729 00:49:56.622720       1 server_linux.go:69] "Using iptables proxy"
	I0729 00:49:56.643764       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	I0729 00:49:56.779490       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 00:49:56.779543       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 00:49:56.779561       1 server_linux.go:165] "Using iptables Proxier"
	I0729 00:49:56.783906       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 00:49:56.784134       1 server.go:872] "Version info" version="v1.30.3"
	I0729 00:49:56.784163       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 00:49:56.785944       1 config.go:192] "Starting service config controller"
	I0729 00:49:56.785976       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 00:49:56.786000       1 config.go:101] "Starting endpoint slice config controller"
	I0729 00:49:56.786004       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 00:49:56.786523       1 config.go:319] "Starting node config controller"
	I0729 00:49:56.786550       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 00:49:56.887399       1 shared_informer.go:320] Caches are synced for node config
	I0729 00:49:56.887441       1 shared_informer.go:320] Caches are synced for service config
	I0729 00:49:56.887461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ebe4fbe2afb49ca9058feea4070393aa1fa31206b6877a61b6a9f8184d40346f] <==
	E0729 00:49:38.498005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 00:49:38.497989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 00:49:38.498081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 00:49:38.498225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 00:49:38.498221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 00:49:38.498275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 00:49:39.434115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 00:49:39.434160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 00:49:39.542681       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 00:49:39.542730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 00:49:39.640735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 00:49:39.640778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 00:49:39.644819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 00:49:39.644861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 00:49:39.644831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 00:49:39.644882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 00:49:39.701594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 00:49:39.701775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 00:49:39.705765       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 00:49:39.705846       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 00:49:39.767292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 00:49:39.767472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 00:49:39.822818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 00:49:39.822914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0729 00:49:42.392445       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.569787    1271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60"} err="failed to get container status \"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60\": rpc error: code = NotFound desc = could not find container \"a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60\": container with ID starting with a82f0a589f5e93521649752e248509c64743400454996e9884b7b6ed6bd24b60 not found: ID does not exist"
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.569896    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l65gn\" (UniqueName: \"kubernetes.io/projected/88468d92-3a78-414c-a2bf-a04b6bc1c176-kube-api-access-l65gn\") on node \"addons-657805\" DevicePath \"\""
	Jul 29 00:54:25 addons-657805 kubelet[1271]: I0729 00:54:25.569906    1271 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88468d92-3a78-414c-a2bf-a04b6bc1c176-webhook-cert\") on node \"addons-657805\" DevicePath \"\""
	Jul 29 00:54:27 addons-657805 kubelet[1271]: I0729 00:54:27.180522    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88468d92-3a78-414c-a2bf-a04b6bc1c176" path="/var/lib/kubelet/pods/88468d92-3a78-414c-a2bf-a04b6bc1c176/volumes"
	Jul 29 00:54:41 addons-657805 kubelet[1271]: E0729 00:54:41.200743    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 00:54:41 addons-657805 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 00:54:41 addons-657805 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 00:54:41 addons-657805 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 00:54:41 addons-657805 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 00:54:41 addons-657805 kubelet[1271]: I0729 00:54:41.627560    1271 scope.go:117] "RemoveContainer" containerID="aa55d39ae5e291484ac4f1c33579c2e91f8d2ae625b528e694db118544bbbf83"
	Jul 29 00:54:41 addons-657805 kubelet[1271]: I0729 00:54:41.651684    1271 scope.go:117] "RemoveContainer" containerID="8ff01203fa88086556b450df255a200917869e52399738dc6535f2623640184e"
	Jul 29 00:54:50 addons-657805 kubelet[1271]: I0729 00:54:50.176225    1271 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 00:55:41 addons-657805 kubelet[1271]: E0729 00:55:41.202059    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 00:55:41 addons-657805 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 00:55:41 addons-657805 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 00:55:41 addons-657805 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 00:55:41 addons-657805 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 00:56:06 addons-657805 kubelet[1271]: I0729 00:56:06.176530    1271 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 00:56:27 addons-657805 kubelet[1271]: I0729 00:56:27.071938    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-srwb4" podStartSLOduration=124.579433961 podStartE2EDuration="2m7.071883268s" podCreationTimestamp="2024-07-29 00:54:20 +0000 UTC" firstStartedPulling="2024-07-29 00:54:20.866031804 +0000 UTC m=+279.852157280" lastFinishedPulling="2024-07-29 00:54:23.358481102 +0000 UTC m=+282.344606587" observedRunningTime="2024-07-29 00:54:23.531020881 +0000 UTC m=+282.517146377" watchObservedRunningTime="2024-07-29 00:56:27.071883268 +0000 UTC m=+406.058008753"
	Jul 29 00:56:28 addons-657805 kubelet[1271]: I0729 00:56:28.470757    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jks9s\" (UniqueName: \"kubernetes.io/projected/f3d59e24-fa87-4a81-a526-dd3281cc933f-kube-api-access-jks9s\") pod \"f3d59e24-fa87-4a81-a526-dd3281cc933f\" (UID: \"f3d59e24-fa87-4a81-a526-dd3281cc933f\") "
	Jul 29 00:56:28 addons-657805 kubelet[1271]: I0729 00:56:28.470824    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f3d59e24-fa87-4a81-a526-dd3281cc933f-tmp-dir\") pod \"f3d59e24-fa87-4a81-a526-dd3281cc933f\" (UID: \"f3d59e24-fa87-4a81-a526-dd3281cc933f\") "
	Jul 29 00:56:28 addons-657805 kubelet[1271]: I0729 00:56:28.471280    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d59e24-fa87-4a81-a526-dd3281cc933f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f3d59e24-fa87-4a81-a526-dd3281cc933f" (UID: "f3d59e24-fa87-4a81-a526-dd3281cc933f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 00:56:28 addons-657805 kubelet[1271]: I0729 00:56:28.482881    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d59e24-fa87-4a81-a526-dd3281cc933f-kube-api-access-jks9s" (OuterVolumeSpecName: "kube-api-access-jks9s") pod "f3d59e24-fa87-4a81-a526-dd3281cc933f" (UID: "f3d59e24-fa87-4a81-a526-dd3281cc933f"). InnerVolumeSpecName "kube-api-access-jks9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 00:56:28 addons-657805 kubelet[1271]: I0729 00:56:28.571496    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jks9s\" (UniqueName: \"kubernetes.io/projected/f3d59e24-fa87-4a81-a526-dd3281cc933f-kube-api-access-jks9s\") on node \"addons-657805\" DevicePath \"\""
	Jul 29 00:56:28 addons-657805 kubelet[1271]: I0729 00:56:28.571527    1271 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f3d59e24-fa87-4a81-a526-dd3281cc933f-tmp-dir\") on node \"addons-657805\" DevicePath \"\""
	
	
	==> storage-provisioner [1e58106e1d28009e5317c0c8b0c0511dbc63cfc12df5326a4fc50e59342362f7] <==
	I0729 00:50:02.521606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 00:50:02.558535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 00:50:02.558604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 00:50:02.586184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 00:50:02.586687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-657805_66e7d17a-5e06-4549-9e4b-393f6ba9cef3!
	I0729 00:50:02.587754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3cd53403-dd90-4256-99b3-90c443eea919", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-657805_66e7d17a-5e06-4549-9e4b-393f6ba9cef3 became leader
	I0729 00:50:02.687077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-657805_66e7d17a-5e06-4549-9e4b-393f6ba9cef3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-657805 -n addons-657805
helpers_test.go:261: (dbg) Run:  kubectl --context addons-657805 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (283.33s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-657805
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-657805: exit status 82 (2m0.456932037s)

                                                
                                                
-- stdout --
	* Stopping node "addons-657805"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-657805" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-657805
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-657805: exit status 11 (21.65449819s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-657805" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-657805
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-657805: exit status 11 (6.144960267s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-657805" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-657805
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-657805: exit status 11 (6.144389461s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-657805" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 node stop m02 -v=7 --alsologtostderr
E0729 01:08:44.993597   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:10:06.914402   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.479894933s)

                                                
                                                
-- stdout --
	* Stopping node "ha-845088-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:08:17.844390   31644 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:08:17.844556   31644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:08:17.844565   31644 out.go:304] Setting ErrFile to fd 2...
	I0729 01:08:17.844570   31644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:08:17.844769   31644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:08:17.844998   31644 mustload.go:65] Loading cluster: ha-845088
	I0729 01:08:17.845385   31644 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:08:17.845406   31644 stop.go:39] StopHost: ha-845088-m02
	I0729 01:08:17.845819   31644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:08:17.845867   31644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:08:17.861944   31644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0729 01:08:17.862426   31644 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:08:17.862999   31644 main.go:141] libmachine: Using API Version  1
	I0729 01:08:17.863024   31644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:08:17.863408   31644 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:08:17.865909   31644 out.go:177] * Stopping node "ha-845088-m02"  ...
	I0729 01:08:17.867315   31644 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 01:08:17.867343   31644 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:08:17.867561   31644 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 01:08:17.867601   31644 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:08:17.870372   31644 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:08:17.870862   31644 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:08:17.870897   31644 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:08:17.871070   31644 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:08:17.871365   31644 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:08:17.871520   31644 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:08:17.871707   31644 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:08:17.959655   31644 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 01:08:18.015874   31644 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 01:08:18.070087   31644 main.go:141] libmachine: Stopping "ha-845088-m02"...
	I0729 01:08:18.070130   31644 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:08:18.072116   31644 main.go:141] libmachine: (ha-845088-m02) Calling .Stop
	I0729 01:08:18.076025   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 0/120
	I0729 01:08:19.077412   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 1/120
	I0729 01:08:20.079601   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 2/120
	I0729 01:08:21.081593   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 3/120
	I0729 01:08:22.084009   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 4/120
	I0729 01:08:23.085938   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 5/120
	I0729 01:08:24.087999   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 6/120
	I0729 01:08:25.089687   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 7/120
	I0729 01:08:26.091083   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 8/120
	I0729 01:08:27.093100   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 9/120
	I0729 01:08:28.095304   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 10/120
	I0729 01:08:29.097570   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 11/120
	I0729 01:08:30.099542   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 12/120
	I0729 01:08:31.101452   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 13/120
	I0729 01:08:32.102887   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 14/120
	I0729 01:08:33.104885   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 15/120
	I0729 01:08:34.106146   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 16/120
	I0729 01:08:35.108051   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 17/120
	I0729 01:08:36.109471   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 18/120
	I0729 01:08:37.111725   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 19/120
	I0729 01:08:38.114074   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 20/120
	I0729 01:08:39.115507   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 21/120
	I0729 01:08:40.117808   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 22/120
	I0729 01:08:41.119246   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 23/120
	I0729 01:08:42.120734   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 24/120
	I0729 01:08:43.122675   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 25/120
	I0729 01:08:44.124351   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 26/120
	I0729 01:08:45.126845   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 27/120
	I0729 01:08:46.128330   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 28/120
	I0729 01:08:47.129842   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 29/120
	I0729 01:08:48.132105   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 30/120
	I0729 01:08:49.133386   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 31/120
	I0729 01:08:50.134563   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 32/120
	I0729 01:08:51.136466   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 33/120
	I0729 01:08:52.138022   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 34/120
	I0729 01:08:53.139886   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 35/120
	I0729 01:08:54.141551   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 36/120
	I0729 01:08:55.143703   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 37/120
	I0729 01:08:56.145091   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 38/120
	I0729 01:08:57.147432   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 39/120
	I0729 01:08:58.149317   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 40/120
	I0729 01:08:59.150447   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 41/120
	I0729 01:09:00.151715   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 42/120
	I0729 01:09:01.153131   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 43/120
	I0729 01:09:02.154901   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 44/120
	I0729 01:09:03.157009   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 45/120
	I0729 01:09:04.159205   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 46/120
	I0729 01:09:05.160522   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 47/120
	I0729 01:09:06.162929   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 48/120
	I0729 01:09:07.164312   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 49/120
	I0729 01:09:08.166463   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 50/120
	I0729 01:09:09.169253   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 51/120
	I0729 01:09:10.170559   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 52/120
	I0729 01:09:11.172228   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 53/120
	I0729 01:09:12.173636   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 54/120
	I0729 01:09:13.175451   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 55/120
	I0729 01:09:14.177814   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 56/120
	I0729 01:09:15.179842   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 57/120
	I0729 01:09:16.181790   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 58/120
	I0729 01:09:17.183156   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 59/120
	I0729 01:09:18.185059   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 60/120
	I0729 01:09:19.186537   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 61/120
	I0729 01:09:20.187921   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 62/120
	I0729 01:09:21.189402   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 63/120
	I0729 01:09:22.190886   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 64/120
	I0729 01:09:23.192587   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 65/120
	I0729 01:09:24.194353   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 66/120
	I0729 01:09:25.195640   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 67/120
	I0729 01:09:26.197337   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 68/120
	I0729 01:09:27.198857   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 69/120
	I0729 01:09:28.200833   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 70/120
	I0729 01:09:29.202264   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 71/120
	I0729 01:09:30.203577   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 72/120
	I0729 01:09:31.205596   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 73/120
	I0729 01:09:32.207901   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 74/120
	I0729 01:09:33.209392   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 75/120
	I0729 01:09:34.210686   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 76/120
	I0729 01:09:35.211925   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 77/120
	I0729 01:09:36.213213   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 78/120
	I0729 01:09:37.214427   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 79/120
	I0729 01:09:38.216516   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 80/120
	I0729 01:09:39.217633   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 81/120
	I0729 01:09:40.219069   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 82/120
	I0729 01:09:41.220346   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 83/120
	I0729 01:09:42.221561   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 84/120
	I0729 01:09:43.223071   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 85/120
	I0729 01:09:44.224408   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 86/120
	I0729 01:09:45.225658   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 87/120
	I0729 01:09:46.226983   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 88/120
	I0729 01:09:47.228328   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 89/120
	I0729 01:09:48.229919   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 90/120
	I0729 01:09:49.231933   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 91/120
	I0729 01:09:50.233621   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 92/120
	I0729 01:09:51.235315   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 93/120
	I0729 01:09:52.237629   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 94/120
	I0729 01:09:53.239454   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 95/120
	I0729 01:09:54.241480   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 96/120
	I0729 01:09:55.242644   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 97/120
	I0729 01:09:56.244419   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 98/120
	I0729 01:09:57.245730   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 99/120
	I0729 01:09:58.247921   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 100/120
	I0729 01:09:59.249527   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 101/120
	I0729 01:10:00.250755   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 102/120
	I0729 01:10:01.252443   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 103/120
	I0729 01:10:02.254055   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 104/120
	I0729 01:10:03.255755   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 105/120
	I0729 01:10:04.257733   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 106/120
	I0729 01:10:05.259300   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 107/120
	I0729 01:10:06.261497   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 108/120
	I0729 01:10:07.262718   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 109/120
	I0729 01:10:08.264556   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 110/120
	I0729 01:10:09.265873   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 111/120
	I0729 01:10:10.267269   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 112/120
	I0729 01:10:11.269747   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 113/120
	I0729 01:10:12.271724   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 114/120
	I0729 01:10:13.273851   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 115/120
	I0729 01:10:14.275237   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 116/120
	I0729 01:10:15.277672   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 117/120
	I0729 01:10:16.279043   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 118/120
	I0729 01:10:17.280273   31644 main.go:141] libmachine: (ha-845088-m02) Waiting for machine to stop 119/120
	I0729 01:10:18.281323   31644 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 01:10:18.281520   31644 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-845088 node stop m02 -v=7 --alsologtostderr": exit status 30
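Note on the failure pattern above: the kvm2 driver issues the stop request and then polls the domain state once per second, counting up to 120 attempts ("Waiting for machine to stop N/120"); when the guest is still "Running" after the last attempt it gives up with "unable to stop vm", which the CLI surfaces as exit status 30. The following is a minimal Go sketch of such a bounded stop/poll loop, using illustrative names (machine, stopWithTimeout, stuckVM); it is not minikube's actual implementation.

	// Minimal sketch of a bounded stop/poll loop, assuming a simplified
	// machine interface; names are illustrative, not minikube's API.
	package main

	import (
		"fmt"
		"time"
	)

	type machine interface {
		Stop() error            // request a guest shutdown
		State() (string, error) // e.g. "Running" or "Stopped"
	}

	// stopWithTimeout requests a stop, then polls once per second for up to
	// maxAttempts, mirroring the "Waiting for machine to stop N/120" counter
	// and the final "unable to stop vm" error seen in the log above.
	func stopWithTimeout(m machine, maxAttempts int) error {
		if err := m.Stop(); err != nil {
			return fmt.Errorf("stop request failed: %w", err)
		}
		for i := 0; i < maxAttempts; i++ {
			st, err := m.State()
			if err != nil {
				return err
			}
			if st != "Running" {
				return nil // guest reached a stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		st, err := m.State()
		if err != nil {
			return err
		}
		return fmt.Errorf("unable to stop vm, current state %q", st)
	}

	// stuckVM never leaves "Running", reproducing the failure mode seen here:
	// the stop request is accepted but the guest never halts.
	type stuckVM struct{}

	func (stuckVM) Stop() error            { return nil }
	func (stuckVM) State() (string, error) { return "Running", nil }

	func main() {
		if err := stopWithTimeout(stuckVM{}, 5); err != nil {
			fmt.Println("Failed to stop node m02:", err)
		}
	}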
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (19.231486675s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:10:18.323989   32075 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:10:18.324278   32075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:18.324287   32075 out.go:304] Setting ErrFile to fd 2...
	I0729 01:10:18.324291   32075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:18.324501   32075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:10:18.324712   32075 out.go:298] Setting JSON to false
	I0729 01:10:18.324738   32075 mustload.go:65] Loading cluster: ha-845088
	I0729 01:10:18.324847   32075 notify.go:220] Checking for updates...
	I0729 01:10:18.325173   32075 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:10:18.325195   32075 status.go:255] checking status of ha-845088 ...
	I0729 01:10:18.325672   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:18.325741   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:18.343899   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0729 01:10:18.344364   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:18.345097   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:18.345128   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:18.345590   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:18.345804   32075 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:10:18.347484   32075 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:10:18.347507   32075 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:18.347792   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:18.347829   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:18.363142   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44923
	I0729 01:10:18.363627   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:18.364072   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:18.364095   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:18.364387   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:18.364551   32075 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:10:18.367598   32075 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:18.368109   32075 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:18.368140   32075 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:18.368273   32075 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:18.368547   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:18.368585   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:18.383182   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0729 01:10:18.383670   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:18.384176   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:18.384193   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:18.384514   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:18.384698   32075 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:10:18.384900   32075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:18.384927   32075 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:10:18.387667   32075 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:18.388066   32075 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:18.388106   32075 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:18.388217   32075 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:10:18.388436   32075 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:10:18.388602   32075 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:10:18.388750   32075 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:10:18.476174   32075 ssh_runner.go:195] Run: systemctl --version
	I0729 01:10:18.483045   32075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:18.501653   32075 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:18.501688   32075 api_server.go:166] Checking apiserver status ...
	I0729 01:10:18.501728   32075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:18.518556   32075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:10:18.530658   32075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:18.530717   32075 ssh_runner.go:195] Run: ls
	I0729 01:10:18.535921   32075 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:18.540093   32075 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:18.540112   32075 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:10:18.540121   32075 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:18.540139   32075 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:10:18.540441   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:18.540478   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:18.555235   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0729 01:10:18.555641   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:18.556121   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:18.556140   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:18.556431   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:18.556625   32075 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:10:18.558200   32075 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:10:18.558218   32075 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:18.558603   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:18.558656   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:18.573162   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0729 01:10:18.573623   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:18.574146   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:18.574166   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:18.574424   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:18.574604   32075 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:10:18.577671   32075 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:18.578169   32075 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:18.578206   32075 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:18.578359   32075 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:18.578645   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:18.578697   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:18.593610   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0729 01:10:18.593975   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:18.594472   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:18.594490   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:18.594830   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:18.595022   32075 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:10:18.595214   32075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:18.595239   32075 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:10:18.598161   32075 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:18.598602   32075 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:18.598627   32075 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:18.598759   32075 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:10:18.598928   32075 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:10:18.599047   32075 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:10:18.599217   32075 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:10:37.151340   32075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:10:37.151454   32075 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:10:37.151475   32075 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:37.151488   32075 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:10:37.151512   32075 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:37.151523   32075 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:10:37.151854   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:37.151917   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:37.167054   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0729 01:10:37.167463   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:37.168007   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:37.168033   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:37.168377   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:37.168545   32075 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:10:37.170053   32075 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:10:37.170072   32075 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:37.170429   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:37.170465   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:37.185037   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33007
	I0729 01:10:37.185420   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:37.185898   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:37.185919   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:37.186279   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:37.186463   32075 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:10:37.189370   32075 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:37.189789   32075 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:37.189815   32075 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:37.189980   32075 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:37.190353   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:37.190392   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:37.204378   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0729 01:10:37.204789   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:37.205242   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:37.205265   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:37.205506   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:37.205662   32075 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:10:37.205818   32075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:37.205837   32075 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:10:37.208421   32075 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:37.208781   32075 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:37.208806   32075 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:37.208981   32075 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:10:37.209146   32075 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:10:37.209264   32075 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:10:37.209382   32075 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:10:37.296217   32075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:37.317812   32075 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:37.317846   32075 api_server.go:166] Checking apiserver status ...
	I0729 01:10:37.317884   32075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:37.332264   32075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:10:37.341903   32075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:37.341970   32075 ssh_runner.go:195] Run: ls
	I0729 01:10:37.346278   32075 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:37.352566   32075 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:37.352588   32075 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:10:37.352595   32075 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:37.352609   32075 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:10:37.352891   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:37.352921   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:37.367515   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39601
	I0729 01:10:37.367934   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:37.368473   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:37.368499   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:37.368767   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:37.368966   32075 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:10:37.370465   32075 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:10:37.370483   32075 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:37.370773   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:37.370808   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:37.385159   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0729 01:10:37.385581   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:37.386058   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:37.386082   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:37.386364   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:37.386527   32075 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:10:37.389347   32075 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:37.389833   32075 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:37.389858   32075 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:37.390019   32075 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:37.390321   32075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:37.390365   32075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:37.405324   32075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0729 01:10:37.405768   32075 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:37.406252   32075 main.go:141] libmachine: Using API Version  1
	I0729 01:10:37.406271   32075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:37.406517   32075 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:37.406712   32075 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:10:37.406895   32075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:37.406912   32075 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:10:37.409745   32075 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:37.410169   32075 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:37.410201   32075 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:37.410352   32075 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:10:37.410500   32075 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:10:37.410649   32075 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:10:37.410784   32075 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:10:37.495688   32075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:37.512233   32075 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr" : exit status 3
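Note on the status classification above: the status probe opens an SSH session to each node and runs `df -h /var`; for m02 the TCP dial to 192.168.39.68:22 fails with "no route to host", and that dial failure is what turns the node's report into host: Error with kubelet/apiserver: Nonexistent and the overall exit status 3. Below is a simplified Go sketch of that mapping, using a plain TCP dial and illustrative names (nodeStatus, classify) rather than minikube's real types.

	// Simplified sketch: map SSH reachability to a per-node status line.
	// nodeStatus and classify are illustrative; the real probe runs
	// "df -h /var" and kubelet/apiserver checks over an SSH session.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	type nodeStatus struct {
		Name, Host, Kubelet, APIServer string
	}

	// classify dials the node's SSH port with a timeout; a dial failure such as
	// "no route to host" is reported as Host:Error with Nonexistent components,
	// matching the m02 entry in the status output above.
	func classify(name, addr string) nodeStatus {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
		}
		conn.Close()
		// A reachable host would go on to run the kubelet and apiserver checks
		// (e.g. "sudo systemctl is-active --quiet service kubelet") before
		// reporting Running.
		return nodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
	}

	func main() {
		// 192.168.39.68 is m02's address from the log; while it is unreachable
		// the node is classified as Error, just as the test observed.
		s := classify("ha-845088-m02", "192.168.39.68:22")
		fmt.Printf("%s\n\thost: %s\n\tkubelet: %s\n\tapiserver: %s\n", s.Name, s.Host, s.Kubelet, s.APIServer)
	}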
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-845088 -n ha-845088
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-845088 logs -n 25: (1.433966231s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088:/home/docker/cp-test_ha-845088-m03_ha-845088.txt                       |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088 sudo cat                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088.txt                                 |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m04 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp testdata/cp-test.txt                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088:/home/docker/cp-test_ha-845088-m04_ha-845088.txt                       |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088 sudo cat                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088.txt                                 |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03:/home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m03 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-845088 node stop m02 -v=7                                                     | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:03:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:03:12.121877   27502 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:03:12.122154   27502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:03:12.122164   27502 out.go:304] Setting ErrFile to fd 2...
	I0729 01:03:12.122168   27502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:03:12.122348   27502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:03:12.122892   27502 out.go:298] Setting JSON to false
	I0729 01:03:12.123711   27502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2738,"bootTime":1722212254,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:03:12.123766   27502 start.go:139] virtualization: kvm guest
	I0729 01:03:12.126179   27502 out.go:177] * [ha-845088] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:03:12.127700   27502 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:03:12.127697   27502 notify.go:220] Checking for updates...
	I0729 01:03:12.130313   27502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:03:12.131713   27502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:03:12.133085   27502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:03:12.134411   27502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:03:12.135783   27502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:03:12.137175   27502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:03:12.172209   27502 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 01:03:12.173552   27502 start.go:297] selected driver: kvm2
	I0729 01:03:12.173562   27502 start.go:901] validating driver "kvm2" against <nil>
	I0729 01:03:12.173572   27502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:03:12.174224   27502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:03:12.174292   27502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:03:12.189041   27502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:03:12.189114   27502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 01:03:12.189323   27502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:03:12.189349   27502 cni.go:84] Creating CNI manager for ""
	I0729 01:03:12.189355   27502 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 01:03:12.189360   27502 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 01:03:12.189418   27502 start.go:340] cluster config:
	{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:03:12.189503   27502 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:03:12.191160   27502 out.go:177] * Starting "ha-845088" primary control-plane node in "ha-845088" cluster
	I0729 01:03:12.192391   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:03:12.192425   27502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:03:12.192436   27502 cache.go:56] Caching tarball of preloaded images
	I0729 01:03:12.192516   27502 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:03:12.192529   27502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:03:12.192821   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:03:12.192841   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json: {Name:mkf0b69659feb56f46b54c3a61f0315d19af49eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:12.192976   27502 start.go:360] acquireMachinesLock for ha-845088: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:03:12.193009   27502 start.go:364] duration metric: took 17.052µs to acquireMachinesLock for "ha-845088"
	I0729 01:03:12.193030   27502 start.go:93] Provisioning new machine with config: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:03:12.193098   27502 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 01:03:12.194890   27502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:03:12.195002   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:03:12.195037   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:03:12.208952   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0729 01:03:12.209335   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:03:12.209831   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:03:12.209846   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:03:12.210186   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:03:12.210362   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:12.210532   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:12.210704   27502 start.go:159] libmachine.API.Create for "ha-845088" (driver="kvm2")
	I0729 01:03:12.210730   27502 client.go:168] LocalClient.Create starting
	I0729 01:03:12.210754   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:03:12.210787   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:03:12.210800   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:03:12.210853   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:03:12.210871   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:03:12.210884   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:03:12.210900   27502 main.go:141] libmachine: Running pre-create checks...
	I0729 01:03:12.210912   27502 main.go:141] libmachine: (ha-845088) Calling .PreCreateCheck
	I0729 01:03:12.211247   27502 main.go:141] libmachine: (ha-845088) Calling .GetConfigRaw
	I0729 01:03:12.211598   27502 main.go:141] libmachine: Creating machine...
	I0729 01:03:12.211612   27502 main.go:141] libmachine: (ha-845088) Calling .Create
	I0729 01:03:12.211746   27502 main.go:141] libmachine: (ha-845088) Creating KVM machine...
	I0729 01:03:12.213004   27502 main.go:141] libmachine: (ha-845088) DBG | found existing default KVM network
	I0729 01:03:12.213700   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.213583   27525 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0729 01:03:12.213717   27502 main.go:141] libmachine: (ha-845088) DBG | created network xml: 
	I0729 01:03:12.213728   27502 main.go:141] libmachine: (ha-845088) DBG | <network>
	I0729 01:03:12.213741   27502 main.go:141] libmachine: (ha-845088) DBG |   <name>mk-ha-845088</name>
	I0729 01:03:12.213750   27502 main.go:141] libmachine: (ha-845088) DBG |   <dns enable='no'/>
	I0729 01:03:12.213757   27502 main.go:141] libmachine: (ha-845088) DBG |   
	I0729 01:03:12.213768   27502 main.go:141] libmachine: (ha-845088) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 01:03:12.213779   27502 main.go:141] libmachine: (ha-845088) DBG |     <dhcp>
	I0729 01:03:12.213786   27502 main.go:141] libmachine: (ha-845088) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 01:03:12.213792   27502 main.go:141] libmachine: (ha-845088) DBG |     </dhcp>
	I0729 01:03:12.213806   27502 main.go:141] libmachine: (ha-845088) DBG |   </ip>
	I0729 01:03:12.213819   27502 main.go:141] libmachine: (ha-845088) DBG |   
	I0729 01:03:12.213831   27502 main.go:141] libmachine: (ha-845088) DBG | </network>
	I0729 01:03:12.213845   27502 main.go:141] libmachine: (ha-845088) DBG | 
	I0729 01:03:12.218774   27502 main.go:141] libmachine: (ha-845088) DBG | trying to create private KVM network mk-ha-845088 192.168.39.0/24...
	I0729 01:03:12.283925   27502 main.go:141] libmachine: (ha-845088) DBG | private KVM network mk-ha-845088 192.168.39.0/24 created
	I0729 01:03:12.283965   27502 main.go:141] libmachine: (ha-845088) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088 ...
	I0729 01:03:12.283979   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.283913   27525 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:03:12.284066   27502 main.go:141] libmachine: (ha-845088) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:03:12.284085   27502 main.go:141] libmachine: (ha-845088) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:03:12.517784   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.517610   27525 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa...
	I0729 01:03:12.638198   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.638078   27525 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/ha-845088.rawdisk...
	I0729 01:03:12.638239   27502 main.go:141] libmachine: (ha-845088) DBG | Writing magic tar header
	I0729 01:03:12.638254   27502 main.go:141] libmachine: (ha-845088) DBG | Writing SSH key tar header
	I0729 01:03:12.638303   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.638214   27525 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088 ...
	I0729 01:03:12.638351   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088
	I0729 01:03:12.638379   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:03:12.638391   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088 (perms=drwx------)
	I0729 01:03:12.638405   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:03:12.638415   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:03:12.638429   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:03:12.638442   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:03:12.638456   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:03:12.638481   27502 main.go:141] libmachine: (ha-845088) Creating domain...
	I0729 01:03:12.638494   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:03:12.638509   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:03:12.638522   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:03:12.638536   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:03:12.638552   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home
	I0729 01:03:12.638567   27502 main.go:141] libmachine: (ha-845088) DBG | Skipping /home - not owner
	I0729 01:03:12.639556   27502 main.go:141] libmachine: (ha-845088) define libvirt domain using xml: 
	I0729 01:03:12.639580   27502 main.go:141] libmachine: (ha-845088) <domain type='kvm'>
	I0729 01:03:12.639590   27502 main.go:141] libmachine: (ha-845088)   <name>ha-845088</name>
	I0729 01:03:12.639600   27502 main.go:141] libmachine: (ha-845088)   <memory unit='MiB'>2200</memory>
	I0729 01:03:12.639629   27502 main.go:141] libmachine: (ha-845088)   <vcpu>2</vcpu>
	I0729 01:03:12.639658   27502 main.go:141] libmachine: (ha-845088)   <features>
	I0729 01:03:12.639671   27502 main.go:141] libmachine: (ha-845088)     <acpi/>
	I0729 01:03:12.639681   27502 main.go:141] libmachine: (ha-845088)     <apic/>
	I0729 01:03:12.639691   27502 main.go:141] libmachine: (ha-845088)     <pae/>
	I0729 01:03:12.639703   27502 main.go:141] libmachine: (ha-845088)     
	I0729 01:03:12.639714   27502 main.go:141] libmachine: (ha-845088)   </features>
	I0729 01:03:12.639726   27502 main.go:141] libmachine: (ha-845088)   <cpu mode='host-passthrough'>
	I0729 01:03:12.639745   27502 main.go:141] libmachine: (ha-845088)   
	I0729 01:03:12.639759   27502 main.go:141] libmachine: (ha-845088)   </cpu>
	I0729 01:03:12.639776   27502 main.go:141] libmachine: (ha-845088)   <os>
	I0729 01:03:12.639783   27502 main.go:141] libmachine: (ha-845088)     <type>hvm</type>
	I0729 01:03:12.639794   27502 main.go:141] libmachine: (ha-845088)     <boot dev='cdrom'/>
	I0729 01:03:12.639801   27502 main.go:141] libmachine: (ha-845088)     <boot dev='hd'/>
	I0729 01:03:12.639807   27502 main.go:141] libmachine: (ha-845088)     <bootmenu enable='no'/>
	I0729 01:03:12.639813   27502 main.go:141] libmachine: (ha-845088)   </os>
	I0729 01:03:12.639818   27502 main.go:141] libmachine: (ha-845088)   <devices>
	I0729 01:03:12.639825   27502 main.go:141] libmachine: (ha-845088)     <disk type='file' device='cdrom'>
	I0729 01:03:12.639833   27502 main.go:141] libmachine: (ha-845088)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/boot2docker.iso'/>
	I0729 01:03:12.639840   27502 main.go:141] libmachine: (ha-845088)       <target dev='hdc' bus='scsi'/>
	I0729 01:03:12.639845   27502 main.go:141] libmachine: (ha-845088)       <readonly/>
	I0729 01:03:12.639851   27502 main.go:141] libmachine: (ha-845088)     </disk>
	I0729 01:03:12.639857   27502 main.go:141] libmachine: (ha-845088)     <disk type='file' device='disk'>
	I0729 01:03:12.639865   27502 main.go:141] libmachine: (ha-845088)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:03:12.639872   27502 main.go:141] libmachine: (ha-845088)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/ha-845088.rawdisk'/>
	I0729 01:03:12.639879   27502 main.go:141] libmachine: (ha-845088)       <target dev='hda' bus='virtio'/>
	I0729 01:03:12.639884   27502 main.go:141] libmachine: (ha-845088)     </disk>
	I0729 01:03:12.639891   27502 main.go:141] libmachine: (ha-845088)     <interface type='network'>
	I0729 01:03:12.639908   27502 main.go:141] libmachine: (ha-845088)       <source network='mk-ha-845088'/>
	I0729 01:03:12.639924   27502 main.go:141] libmachine: (ha-845088)       <model type='virtio'/>
	I0729 01:03:12.639938   27502 main.go:141] libmachine: (ha-845088)     </interface>
	I0729 01:03:12.639950   27502 main.go:141] libmachine: (ha-845088)     <interface type='network'>
	I0729 01:03:12.639975   27502 main.go:141] libmachine: (ha-845088)       <source network='default'/>
	I0729 01:03:12.639986   27502 main.go:141] libmachine: (ha-845088)       <model type='virtio'/>
	I0729 01:03:12.639998   27502 main.go:141] libmachine: (ha-845088)     </interface>
	I0729 01:03:12.640013   27502 main.go:141] libmachine: (ha-845088)     <serial type='pty'>
	I0729 01:03:12.640025   27502 main.go:141] libmachine: (ha-845088)       <target port='0'/>
	I0729 01:03:12.640034   27502 main.go:141] libmachine: (ha-845088)     </serial>
	I0729 01:03:12.640042   27502 main.go:141] libmachine: (ha-845088)     <console type='pty'>
	I0729 01:03:12.640051   27502 main.go:141] libmachine: (ha-845088)       <target type='serial' port='0'/>
	I0729 01:03:12.640063   27502 main.go:141] libmachine: (ha-845088)     </console>
	I0729 01:03:12.640073   27502 main.go:141] libmachine: (ha-845088)     <rng model='virtio'>
	I0729 01:03:12.640085   27502 main.go:141] libmachine: (ha-845088)       <backend model='random'>/dev/random</backend>
	I0729 01:03:12.640106   27502 main.go:141] libmachine: (ha-845088)     </rng>
	I0729 01:03:12.640116   27502 main.go:141] libmachine: (ha-845088)     
	I0729 01:03:12.640123   27502 main.go:141] libmachine: (ha-845088)     
	I0729 01:03:12.640135   27502 main.go:141] libmachine: (ha-845088)   </devices>
	I0729 01:03:12.640144   27502 main.go:141] libmachine: (ha-845088) </domain>
	I0729 01:03:12.640158   27502 main.go:141] libmachine: (ha-845088) 
	I0729 01:03:12.644333   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:ad:7c:e6 in network default
	I0729 01:03:12.644849   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:12.644881   27502 main.go:141] libmachine: (ha-845088) Ensuring networks are active...
	I0729 01:03:12.645555   27502 main.go:141] libmachine: (ha-845088) Ensuring network default is active
	I0729 01:03:12.645997   27502 main.go:141] libmachine: (ha-845088) Ensuring network mk-ha-845088 is active
	I0729 01:03:12.646730   27502 main.go:141] libmachine: (ha-845088) Getting domain xml...
	I0729 01:03:12.647542   27502 main.go:141] libmachine: (ha-845088) Creating domain...
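	The XML dump above is what the kvm2 driver hands to libvirt to define and start the guest. For orientation only, a minimal Go sketch of that call sequence, assuming the libvirt.org/go/libvirt bindings (requires cgo and the libvirt development headers); this is illustrative and is not minikube's actual driver code:

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Connect to the local system libvirt daemon (the same URI the log
		// shows as KVMQemuURI:qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect to libvirt: %v", err)
		}
		defer conn.Close()

		// domainXML stands in for the full <domain type='kvm'>...</domain>
		// document printed line by line above.
		domainXML := "<domain type='kvm'><name>example</name></domain>"

		// Define the persistent domain, then start ("create") it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		log.Println("domain defined and started")
	}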
	I0729 01:03:13.820993   27502 main.go:141] libmachine: (ha-845088) Waiting to get IP...
	I0729 01:03:13.821909   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:13.822249   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:13.822301   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:13.822244   27525 retry.go:31] will retry after 205.352697ms: waiting for machine to come up
	I0729 01:03:14.029845   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:14.030257   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:14.030278   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:14.030223   27525 retry.go:31] will retry after 381.277024ms: waiting for machine to come up
	I0729 01:03:14.412699   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:14.413153   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:14.413174   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:14.413118   27525 retry.go:31] will retry after 305.705256ms: waiting for machine to come up
	I0729 01:03:14.720560   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:14.721032   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:14.721060   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:14.720984   27525 retry.go:31] will retry after 500.779269ms: waiting for machine to come up
	I0729 01:03:15.223870   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:15.224247   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:15.224273   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:15.224207   27525 retry.go:31] will retry after 590.26977ms: waiting for machine to come up
	I0729 01:03:15.815920   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:15.816426   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:15.816455   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:15.816358   27525 retry.go:31] will retry after 629.065185ms: waiting for machine to come up
	I0729 01:03:16.446722   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:16.447120   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:16.447262   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:16.447079   27525 retry.go:31] will retry after 1.124983475s: waiting for machine to come up
	I0729 01:03:17.575308   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:17.575769   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:17.575795   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:17.575726   27525 retry.go:31] will retry after 1.148377221s: waiting for machine to come up
	I0729 01:03:18.726112   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:18.726642   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:18.726669   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:18.726593   27525 retry.go:31] will retry after 1.423289352s: waiting for machine to come up
	I0729 01:03:20.152088   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:20.152694   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:20.152722   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:20.152660   27525 retry.go:31] will retry after 1.626608206s: waiting for machine to come up
	I0729 01:03:21.780646   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:21.781164   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:21.781192   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:21.781112   27525 retry.go:31] will retry after 2.526440066s: waiting for machine to come up
	I0729 01:03:24.308850   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:24.309278   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:24.309301   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:24.309206   27525 retry.go:31] will retry after 3.090555813s: waiting for machine to come up
	I0729 01:03:27.400891   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:27.401316   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:27.401339   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:27.401277   27525 retry.go:31] will retry after 4.468642103s: waiting for machine to come up
	I0729 01:03:31.874856   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:31.875259   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:31.875283   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:31.875211   27525 retry.go:31] will retry after 5.199836841s: waiting for machine to come up
	I0729 01:03:37.080567   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.080957   27502 main.go:141] libmachine: (ha-845088) Found IP for machine: 192.168.39.69
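	The "will retry after ..." lines above come from a backoff loop that repeatedly checks the network's DHCP leases until the new MAC address shows up with an IP. A small self-contained Go sketch of that pattern; the durations, jitter, and helper names are illustrative and not minikube's actual retry implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt network's DHCP leases
	// for the domain's MAC address.
	func lookupIP() (string, error) {
		return "", errors.New("machine has no lease yet")
	}

	// waitForIP polls lookupIP with a growing, lightly jittered delay until
	// it succeeds or the overall timeout expires.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		if ip, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}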
	I0729 01:03:37.080988   27502 main.go:141] libmachine: (ha-845088) Reserving static IP address...
	I0729 01:03:37.081001   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has current primary IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.081366   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find host DHCP lease matching {name: "ha-845088", mac: "52:54:00:9a:b1:bc", ip: "192.168.39.69"} in network mk-ha-845088
	I0729 01:03:37.152760   27502 main.go:141] libmachine: (ha-845088) DBG | Getting to WaitForSSH function...
	I0729 01:03:37.152790   27502 main.go:141] libmachine: (ha-845088) Reserved static IP address: 192.168.39.69
	I0729 01:03:37.152804   27502 main.go:141] libmachine: (ha-845088) Waiting for SSH to be available...
	I0729 01:03:37.155421   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.155801   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.155825   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.156015   27502 main.go:141] libmachine: (ha-845088) DBG | Using SSH client type: external
	I0729 01:03:37.156037   27502 main.go:141] libmachine: (ha-845088) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa (-rw-------)
	I0729 01:03:37.156119   27502 main.go:141] libmachine: (ha-845088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:03:37.156136   27502 main.go:141] libmachine: (ha-845088) DBG | About to run SSH command:
	I0729 01:03:37.156148   27502 main.go:141] libmachine: (ha-845088) DBG | exit 0
	I0729 01:03:37.278974   27502 main.go:141] libmachine: (ha-845088) DBG | SSH cmd err, output: <nil>: 
	I0729 01:03:37.279326   27502 main.go:141] libmachine: (ha-845088) KVM machine creation complete!
	I0729 01:03:37.279654   27502 main.go:141] libmachine: (ha-845088) Calling .GetConfigRaw
	I0729 01:03:37.280204   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:37.280393   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:37.280580   27502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:03:37.280597   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:03:37.281805   27502 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:03:37.281821   27502 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:03:37.281826   27502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:03:37.281831   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.284074   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.284468   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.284494   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.284678   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.284825   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.284934   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.285053   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.285229   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.285454   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.285473   27502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:03:37.386635   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:03:37.386660   27502 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:03:37.386668   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.389325   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.389644   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.389663   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.389832   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.390004   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.390166   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.390287   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.390513   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.390713   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.390728   27502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:03:37.491706   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:03:37.491792   27502 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:03:37.491804   27502 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:03:37.491812   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:37.492053   27502 buildroot.go:166] provisioning hostname "ha-845088"
	I0729 01:03:37.492077   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:37.492254   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.494745   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.495168   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.495192   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.495410   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.495587   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.495739   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.495861   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.496029   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.496232   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.496250   27502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088 && echo "ha-845088" | sudo tee /etc/hostname
	I0729 01:03:37.618225   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088
	
	I0729 01:03:37.618266   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.620877   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.621184   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.621210   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.621397   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.621568   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.621723   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.621844   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.621992   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.622172   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.622194   27502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:03:37.733775   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:03:37.733819   27502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:03:37.733859   27502 buildroot.go:174] setting up certificates
	I0729 01:03:37.733872   27502 provision.go:84] configureAuth start
	I0729 01:03:37.733884   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:37.734131   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:37.736925   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.737265   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.737290   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.737477   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.739915   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.740277   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.740301   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.740442   27502 provision.go:143] copyHostCerts
	I0729 01:03:37.740471   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:03:37.740510   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:03:37.740536   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:03:37.740620   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:03:37.740719   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:03:37.740738   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:03:37.740745   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:03:37.740773   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:03:37.740865   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:03:37.740884   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:03:37.740891   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:03:37.740913   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:03:37.740979   27502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088 san=[127.0.0.1 192.168.39.69 ha-845088 localhost minikube]
	I0729 01:03:37.994395   27502 provision.go:177] copyRemoteCerts
	I0729 01:03:37.994454   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:03:37.994474   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.997273   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.997552   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.997579   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.997745   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.997931   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.998079   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.998329   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.077580   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:03:38.077663   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:03:38.101819   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:03:38.101886   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 01:03:38.125528   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:03:38.125601   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:03:38.152494   27502 provision.go:87] duration metric: took 418.607353ms to configureAuth
	I0729 01:03:38.152529   27502 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:03:38.152846   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:03:38.152970   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.155443   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.155899   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.155927   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.156064   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.156257   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.156434   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.156561   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.156695   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:38.156884   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:38.156902   27502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:03:38.415551   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:03:38.415592   27502 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:03:38.415605   27502 main.go:141] libmachine: (ha-845088) Calling .GetURL
	I0729 01:03:38.416978   27502 main.go:141] libmachine: (ha-845088) DBG | Using libvirt version 6000000
	I0729 01:03:38.419133   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.419491   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.419520   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.419668   27502 main.go:141] libmachine: Docker is up and running!
	I0729 01:03:38.419680   27502 main.go:141] libmachine: Reticulating splines...
	I0729 01:03:38.419688   27502 client.go:171] duration metric: took 26.20895079s to LocalClient.Create
	I0729 01:03:38.419712   27502 start.go:167] duration metric: took 26.209010013s to libmachine.API.Create "ha-845088"
	I0729 01:03:38.419725   27502 start.go:293] postStartSetup for "ha-845088" (driver="kvm2")
	I0729 01:03:38.419739   27502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:03:38.419760   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.419968   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:03:38.419987   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.421740   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.422019   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.422047   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.422145   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.422372   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.422520   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.422734   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.505848   27502 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:03:38.510137   27502 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:03:38.510159   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:03:38.510215   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:03:38.510280   27502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:03:38.510289   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:03:38.510370   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:03:38.519588   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:03:38.542496   27502 start.go:296] duration metric: took 122.758329ms for postStartSetup
	I0729 01:03:38.542538   27502 main.go:141] libmachine: (ha-845088) Calling .GetConfigRaw
	I0729 01:03:38.543090   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:38.546090   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.546423   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.546446   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.546709   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:03:38.546880   27502 start.go:128] duration metric: took 26.353773114s to createHost
	I0729 01:03:38.546927   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.549434   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.549758   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.549780   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.549920   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.550087   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.550241   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.550360   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.550492   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:38.550654   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:38.550666   27502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:03:38.651773   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215018.631808760
	
	I0729 01:03:38.651793   27502 fix.go:216] guest clock: 1722215018.631808760
	I0729 01:03:38.651869   27502 fix.go:229] Guest: 2024-07-29 01:03:38.63180876 +0000 UTC Remote: 2024-07-29 01:03:38.546890712 +0000 UTC m=+26.463181015 (delta=84.918048ms)
	I0729 01:03:38.651965   27502 fix.go:200] guest clock delta is within tolerance: 84.918048ms
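	The fix.go lines above compare the guest's clock (read over SSH as seconds.nanoseconds) against the host clock and only resynchronize when the drift exceeds a tolerance. A toy Go sketch of that comparison; the threshold and timestamps below are made up for illustration:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Stand-ins: "remote" is the local timestamp taken when the SSH
		// command returned, "guest" is the parsed output of the guest clock.
		remote := time.Now()
		guest := remote.Add(85 * time.Millisecond)

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock drifted by %v, exceeding %v; would resync\n", delta, tolerance)
		}
	}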
	I0729 01:03:38.651975   27502 start.go:83] releasing machines lock for "ha-845088", held for 26.458954029s
	I0729 01:03:38.652007   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.652291   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:38.655227   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.655577   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.655603   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.655776   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.656397   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.656575   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.656649   27502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:03:38.656695   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.656827   27502 ssh_runner.go:195] Run: cat /version.json
	I0729 01:03:38.656854   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.659471   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659499   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659851   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.659886   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.659906   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659923   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659978   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.660047   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.660193   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.660284   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.660352   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.660412   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.660481   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.660537   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.763188   27502 ssh_runner.go:195] Run: systemctl --version
	I0729 01:03:38.769051   27502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:03:38.928651   27502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:03:38.934880   27502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:03:38.934938   27502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:03:38.951248   27502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 01:03:38.951269   27502 start.go:495] detecting cgroup driver to use...
	I0729 01:03:38.951322   27502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:03:38.966590   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:03:38.980253   27502 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:03:38.980300   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:03:38.993611   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:03:39.006971   27502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:03:39.115717   27502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:03:39.249891   27502 docker.go:233] disabling docker service ...
	I0729 01:03:39.249954   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:03:39.264041   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:03:39.277314   27502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:03:39.405886   27502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:03:39.513242   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:03:39.526652   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:03:39.544453   27502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:03:39.544506   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.554325   27502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:03:39.554375   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.564401   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.574340   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.584435   27502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:03:39.595028   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.605150   27502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.622334   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.632242   27502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:03:39.641458   27502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:03:39.641509   27502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:03:39.654339   27502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:03:39.663905   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:03:39.773045   27502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:03:39.919080   27502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:03:39.919152   27502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:03:39.923762   27502 start.go:563] Will wait 60s for crictl version
	I0729 01:03:39.923821   27502 ssh_runner.go:195] Run: which crictl
	I0729 01:03:39.927598   27502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:03:39.968591   27502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:03:39.968665   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:03:39.996574   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:03:40.026801   27502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:03:40.027835   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:40.030475   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:40.030944   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:40.030970   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:40.031236   27502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:03:40.035284   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:03:40.048244   27502 kubeadm.go:883] updating cluster {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:03:40.048358   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:03:40.048399   27502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:03:40.081350   27502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 01:03:40.081420   27502 ssh_runner.go:195] Run: which lz4
	I0729 01:03:40.085479   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 01:03:40.085576   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 01:03:40.089825   27502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 01:03:40.089857   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 01:03:41.478198   27502 crio.go:462] duration metric: took 1.392656825s to copy over tarball
	I0729 01:03:41.478261   27502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 01:03:43.576178   27502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097890941s)
	I0729 01:03:43.576205   27502 crio.go:469] duration metric: took 2.097983811s to extract the tarball
	I0729 01:03:43.576212   27502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 01:03:43.613781   27502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:03:43.661358   27502 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:03:43.661380   27502 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:03:43.661388   27502 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.30.3 crio true true} ...
	I0729 01:03:43.661491   27502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:03:43.661570   27502 ssh_runner.go:195] Run: crio config
	I0729 01:03:43.707003   27502 cni.go:84] Creating CNI manager for ""
	I0729 01:03:43.707027   27502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 01:03:43.707035   27502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:03:43.707055   27502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-845088 NodeName:ha-845088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:03:43.707253   27502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-845088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
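The kubeadm options struct above is rendered into the multi-document YAML just shown. Below is a minimal, hypothetical sketch of that rendering using Go's text/template, covering only a few of the fields seen in the log (advertise address, bind port, cluster name, subnets); minikube's real template is considerably larger.

package main

import (
	"os"
	"text/template"
)

// Hypothetical parameters for a reduced kubeadm config template.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	ClusterName      string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.39.69",
		BindPort:         8443,
		ClusterName:      "mk",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}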
	
	I0729 01:03:43.707289   27502 kube-vip.go:115] generating kube-vip config ...
	I0729 01:03:43.707329   27502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:03:43.724749   27502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:03:43.724858   27502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
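With this manifest in place, kube-vip should announce the HA VIP 192.168.39.254 and, because lb_enable/lb_port are set, load-balance API traffic on 8443. The following is a small, hedged probe, assuming it runs somewhere that can reach the VIP, that only checks that something is serving TLS on that address; it deliberately skips certificate validation.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// Probe the HA VIP from the kube-vip config above. This only confirms a TLS
// listener is up on 192.168.39.254:8443; it does not validate the serving cert.
func main() {
	d := &net.Dialer{Timeout: 5 * time.Second}
	conn, err := tls.DialWithDialer(d, "tcp", "192.168.39.254:8443",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP is serving TLS on 8443")
}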
	I0729 01:03:43.724909   27502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:03:43.734386   27502 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:03:43.734438   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 01:03:43.743325   27502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 01:03:43.759839   27502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:03:43.776186   27502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 01:03:43.792929   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 01:03:43.809209   27502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:03:43.813190   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
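The bash one-liner above rewrites /etc/hosts: it drops any existing control-plane.minikube.internal line, appends the VIP mapping, and copies the result back over the original. An equivalent sketch in Go follows; it stages the new file under /tmp instead of overwriting /etc/hosts directly (the entry and hostname are taken from the log, and the real flow finishes with the `sudo cp` shown above).

package main

import (
	"fmt"
	"os"
	"strings"
)

// Rebuild /etc/hosts without any old control-plane.minikube.internal line,
// append the HA VIP mapping, and stage the result for copying into place.
func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/tmp/hosts.new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("staged updated hosts file at", tmp)
}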
	I0729 01:03:43.825580   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:03:43.939758   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:03:43.956174   27502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.69
	I0729 01:03:43.956193   27502 certs.go:194] generating shared ca certs ...
	I0729 01:03:43.956207   27502 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:43.956372   27502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:03:43.956429   27502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:03:43.956443   27502 certs.go:256] generating profile certs ...
	I0729 01:03:43.956507   27502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:03:43.956525   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt with IP's: []
	I0729 01:03:44.224079   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt ...
	I0729 01:03:44.224108   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt: {Name:mkbb4d0179849c0921fee0deff743f9640d04c5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.224266   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key ...
	I0729 01:03:44.224277   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key: {Name:mk45884c5b38065ca1050aae4f24fc7278238f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.224355   27502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4
	I0729 01:03:44.224369   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.254]
	I0729 01:03:44.428782   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4 ...
	I0729 01:03:44.428812   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4: {Name:mk76a6c23b190fdfad7f1063ffe365289899ef62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.428966   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4 ...
	I0729 01:03:44.428978   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4: {Name:mk95b0589efbe991df6cd9765c9a01073f882d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.429050   27502 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:03:44.429116   27502 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
	I0729 01:03:44.429164   27502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:03:44.429178   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt with IP's: []
	I0729 01:03:44.483832   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt ...
	I0729 01:03:44.483859   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt: {Name:mk686bd0f2ed47a16e90530b62f805f556e01d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.484000   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key ...
	I0729 01:03:44.484010   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key: {Name:mkf175ff6357a3a134578e52096b66d046e1dc3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
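The profile certs generated here are CA-signed leaf certificates whose IP SANs include the service IP, localhost, the node IP, and the HA VIP. Below is a self-contained crypto/x509 sketch that issues a comparable apiserver serving cert; it generates a throwaway CA so the example runs standalone, whereas minikube reuses the CA under .minikube/, and the output file name is purely illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Issue an apiserver-style serving cert with the IP SANs seen in the log,
// signed by a freshly generated (throwaway) CA.
func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.69"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("apiserver.crt")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}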
	I0729 01:03:44.484073   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:03:44.484087   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:03:44.484100   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:03:44.484120   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:03:44.484135   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:03:44.484148   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:03:44.484157   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:03:44.484169   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:03:44.484219   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:03:44.484251   27502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:03:44.484260   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:03:44.484278   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:03:44.484298   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:03:44.484322   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:03:44.484361   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:03:44.484385   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.484397   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.484408   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.484938   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:03:44.511156   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:03:44.535259   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:03:44.560941   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:03:44.584571   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 01:03:44.610596   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:03:44.634708   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:03:44.659226   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:03:44.683196   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:03:44.705864   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:03:44.731688   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:03:44.754701   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:03:44.781252   27502 ssh_runner.go:195] Run: openssl version
	I0729 01:03:44.791545   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:03:44.807347   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.815496   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.815549   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.821732   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:03:44.832529   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:03:44.843275   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.847784   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.847831   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.853560   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:03:44.864183   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:03:44.874389   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.879085   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.879135   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.885177   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
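The pattern above installs each CA into the system trust store the way OpenSSL expects: copy the PEM under /usr/share/ca-certificates and symlink /etc/ssl/certs/<subject-hash>.0 at it. The following sketch reproduces the hash-and-symlink step by shelling out to openssl; the paths are illustrative and root privileges are assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a CA file and creates the
// <hash>.0 symlink in the certs directory, mimicking `ln -fs`.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // force-replace, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}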
	I0729 01:03:44.895553   27502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:03:44.899597   27502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:03:44.899653   27502 kubeadm.go:392] StartCluster: {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:03:44.899739   27502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:03:44.899798   27502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:03:44.940375   27502 cri.go:89] found id: ""
	I0729 01:03:44.940469   27502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 01:03:44.950263   27502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 01:03:44.964111   27502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 01:03:44.974217   27502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 01:03:44.974234   27502 kubeadm.go:157] found existing configuration files:
	
	I0729 01:03:44.974284   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 01:03:44.984164   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 01:03:44.984230   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 01:03:44.994145   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 01:03:45.003670   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 01:03:45.003724   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 01:03:45.013528   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 01:03:45.023044   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 01:03:45.023124   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 01:03:45.032959   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 01:03:45.042034   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 01:03:45.042102   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
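The stale-config check above greps each kubeconfig for the expected control-plane endpoint and removes any file that does not mention it, so kubeadm regenerates them on init. A direct sketch of that logic, with the endpoint and file paths taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// Keep a kubeconfig only if it already points at the expected control-plane
// endpoint; otherwise delete it (the missing-file case is treated the same).
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // same effect as `sudo rm -f`
			fmt.Println("removed stale config:", f)
		}
	}
}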
	I0729 01:03:45.052601   27502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 01:03:45.311557   27502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 01:03:56.764039   27502 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 01:03:56.764102   27502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 01:03:56.764202   27502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 01:03:56.764305   27502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 01:03:56.764412   27502 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 01:03:56.764477   27502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 01:03:56.765979   27502 out.go:204]   - Generating certificates and keys ...
	I0729 01:03:56.766081   27502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 01:03:56.766176   27502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 01:03:56.766283   27502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 01:03:56.766366   27502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 01:03:56.766456   27502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 01:03:56.766523   27502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 01:03:56.766594   27502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 01:03:56.766721   27502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-845088 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I0729 01:03:56.766802   27502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 01:03:56.766951   27502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-845088 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I0729 01:03:56.767044   27502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 01:03:56.767158   27502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 01:03:56.767209   27502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 01:03:56.767276   27502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 01:03:56.767332   27502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 01:03:56.767380   27502 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 01:03:56.767427   27502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 01:03:56.767483   27502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 01:03:56.767528   27502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 01:03:56.767593   27502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 01:03:56.767650   27502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 01:03:56.769063   27502 out.go:204]   - Booting up control plane ...
	I0729 01:03:56.769162   27502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 01:03:56.769250   27502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 01:03:56.769323   27502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 01:03:56.769428   27502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 01:03:56.769527   27502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 01:03:56.769562   27502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 01:03:56.769683   27502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 01:03:56.769754   27502 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 01:03:56.769812   27502 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002033865s
	I0729 01:03:56.769917   27502 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 01:03:56.770002   27502 kubeadm.go:310] [api-check] The API server is healthy after 5.773462821s
	I0729 01:03:56.770153   27502 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 01:03:56.770304   27502 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 01:03:56.770381   27502 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 01:03:56.770570   27502 kubeadm.go:310] [mark-control-plane] Marking the node ha-845088 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 01:03:56.770645   27502 kubeadm.go:310] [bootstrap-token] Using token: wba6wh.0wq67cx7p2t5liwh
	I0729 01:03:56.771907   27502 out.go:204]   - Configuring RBAC rules ...
	I0729 01:03:56.772013   27502 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 01:03:56.772128   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 01:03:56.772308   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 01:03:56.772435   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 01:03:56.772550   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 01:03:56.772648   27502 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 01:03:56.772792   27502 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 01:03:56.772869   27502 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 01:03:56.772923   27502 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 01:03:56.772931   27502 kubeadm.go:310] 
	I0729 01:03:56.772993   27502 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 01:03:56.773002   27502 kubeadm.go:310] 
	I0729 01:03:56.773106   27502 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 01:03:56.773114   27502 kubeadm.go:310] 
	I0729 01:03:56.773146   27502 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 01:03:56.773204   27502 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 01:03:56.773249   27502 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 01:03:56.773255   27502 kubeadm.go:310] 
	I0729 01:03:56.773301   27502 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 01:03:56.773307   27502 kubeadm.go:310] 
	I0729 01:03:56.773376   27502 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 01:03:56.773386   27502 kubeadm.go:310] 
	I0729 01:03:56.773451   27502 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 01:03:56.773560   27502 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 01:03:56.773650   27502 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 01:03:56.773659   27502 kubeadm.go:310] 
	I0729 01:03:56.773734   27502 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 01:03:56.773809   27502 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 01:03:56.773816   27502 kubeadm.go:310] 
	I0729 01:03:56.773888   27502 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wba6wh.0wq67cx7p2t5liwh \
	I0729 01:03:56.774002   27502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 \
	I0729 01:03:56.774033   27502 kubeadm.go:310] 	--control-plane 
	I0729 01:03:56.774047   27502 kubeadm.go:310] 
	I0729 01:03:56.774151   27502 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 01:03:56.774160   27502 kubeadm.go:310] 
	I0729 01:03:56.774237   27502 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wba6wh.0wq67cx7p2t5liwh \
	I0729 01:03:56.774347   27502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 
	I0729 01:03:56.774360   27502 cni.go:84] Creating CNI manager for ""
	I0729 01:03:56.774369   27502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 01:03:56.776248   27502 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 01:03:56.777435   27502 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 01:03:56.782945   27502 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 01:03:56.782956   27502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 01:03:56.801793   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 01:03:57.177597   27502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 01:03:57.177701   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:57.177740   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-845088 minikube.k8s.io/updated_at=2024_07_29T01_03_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=ha-845088 minikube.k8s.io/primary=true
	I0729 01:03:57.404875   27502 ops.go:34] apiserver oom_adj: -16
	I0729 01:03:57.404945   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:57.905271   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:58.405992   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:58.905034   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:59.405558   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:59.905146   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:00.405082   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:00.905104   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:01.405848   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:01.906077   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:02.405996   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:02.905569   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:03.405157   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:03.905634   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:04.405759   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:04.905414   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:05.405910   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:05.905062   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:06.405553   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:06.905623   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:07.405304   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:07.905076   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:08.405716   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:08.905707   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:09.405988   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:09.520575   27502 kubeadm.go:1113] duration metric: took 12.342932836s to wait for elevateKubeSystemPrivileges
	I0729 01:04:09.520618   27502 kubeadm.go:394] duration metric: took 24.62096883s to StartCluster
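The repeated `kubectl get sa default` calls above are a poll: kubeadm init has already finished, but minikube waits until the default ServiceAccount exists before granting kube-system elevated privileges. Below is a sketch of the same polling pattern, assuming a plain `kubectl` on PATH rather than the versioned binary and sudo-over-SSH used in the log.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// Retry `kubectl get sa default` every 500ms until it succeeds or a timeout
// elapses. The kubeconfig path is the one used in the log.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for the default service account")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}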
	I0729 01:04:09.520641   27502 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:09.520735   27502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:04:09.521863   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:09.522124   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 01:04:09.522122   27502 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:04:09.522150   27502 start.go:241] waiting for startup goroutines ...
	I0729 01:04:09.522167   27502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 01:04:09.522262   27502 addons.go:69] Setting storage-provisioner=true in profile "ha-845088"
	I0729 01:04:09.522292   27502 addons.go:234] Setting addon storage-provisioner=true in "ha-845088"
	I0729 01:04:09.522318   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:09.522335   27502 addons.go:69] Setting default-storageclass=true in profile "ha-845088"
	I0729 01:04:09.522322   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:09.522370   27502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-845088"
	I0729 01:04:09.522817   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.522859   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.522873   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.522919   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.537493   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0729 01:04:09.537661   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0729 01:04:09.537989   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.538079   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.538502   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.538518   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.538502   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.538573   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.538879   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.538938   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.539054   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:09.539521   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.539565   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.542321   27502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:04:09.542583   27502 kapi.go:59] client config for ha-845088: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 01:04:09.543083   27502 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 01:04:09.543284   27502 addons.go:234] Setting addon default-storageclass=true in "ha-845088"
	I0729 01:04:09.543320   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:09.543599   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.543625   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.554984   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0729 01:04:09.555471   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.556036   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.556068   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.556390   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.556562   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:09.558407   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:09.558419   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0729 01:04:09.558732   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.559174   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.559203   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.559500   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.559968   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.560000   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.560530   27502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:04:09.561902   27502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 01:04:09.561921   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 01:04:09.561938   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:09.565129   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.565561   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:09.565587   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.565736   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:09.565962   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:09.566199   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:09.566398   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:09.575135   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0729 01:04:09.575558   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.576098   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.576129   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.576464   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.576631   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:09.578328   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:09.578541   27502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 01:04:09.578557   27502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 01:04:09.578574   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:09.581517   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.581935   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:09.581964   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.582084   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:09.582248   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:09.582389   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:09.582499   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:09.637293   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 01:04:09.695581   27502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 01:04:09.734937   27502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 01:04:10.056900   27502 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
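The pipeline above pulls the coredns ConfigMap, uses sed to insert a hosts block (mapping host.minikube.internal to the host IP) ahead of the forward plugin, and replaces the ConfigMap. Below is a sketch of just the Corefile edit as a string transformation; fetching and replacing the ConfigMap via kubectl is omitted, and the sample Corefile is abbreviated.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block for host.minikube.internal
// immediately before the forward plugin line, mirroring the sed expression
// in the log.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}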
	I0729 01:04:10.369207   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369235   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369209   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369298   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369534   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369541   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369552   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369555   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369562   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369564   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369570   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369573   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369804   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369824   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369841   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369849   27502 main.go:141] libmachine: (ha-845088) DBG | Closing plugin on server side
	I0729 01:04:10.369861   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369989   27502 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 01:04:10.370001   27502 round_trippers.go:469] Request Headers:
	I0729 01:04:10.370011   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:04:10.370018   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:04:10.385217   27502 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0729 01:04:10.385736   27502 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 01:04:10.385749   27502 round_trippers.go:469] Request Headers:
	I0729 01:04:10.385757   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:04:10.385761   27502 round_trippers.go:473]     Content-Type: application/json
	I0729 01:04:10.385768   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:04:10.388548   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:04:10.388683   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.388694   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.388943   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.388976   27502 main.go:141] libmachine: (ha-845088) DBG | Closing plugin on server side
	I0729 01:04:10.388987   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.390661   27502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 01:04:10.391903   27502 addons.go:510] duration metric: took 869.739884ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 01:04:10.391937   27502 start.go:246] waiting for cluster config update ...
	I0729 01:04:10.391949   27502 start.go:255] writing updated cluster config ...
	I0729 01:04:10.393367   27502 out.go:177] 
	I0729 01:04:10.394536   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:10.394621   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:04:10.396227   27502 out.go:177] * Starting "ha-845088-m02" control-plane node in "ha-845088" cluster
	I0729 01:04:10.397301   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:04:10.397323   27502 cache.go:56] Caching tarball of preloaded images
	I0729 01:04:10.397415   27502 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:04:10.397429   27502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:04:10.397502   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:04:10.397661   27502 start.go:360] acquireMachinesLock for ha-845088-m02: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:04:10.397711   27502 start.go:364] duration metric: took 30.086µs to acquireMachinesLock for "ha-845088-m02"
	I0729 01:04:10.397735   27502 start.go:93] Provisioning new machine with config: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:04:10.397824   27502 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 01:04:10.399120   27502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:04:10.399205   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:10.399230   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:10.413793   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I0729 01:04:10.414280   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:10.414715   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:10.414740   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:10.415137   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:10.415314   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:10.415431   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:10.415555   27502 start.go:159] libmachine.API.Create for "ha-845088" (driver="kvm2")
	I0729 01:04:10.415578   27502 client.go:168] LocalClient.Create starting
	I0729 01:04:10.415604   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:04:10.415633   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:04:10.415647   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:04:10.415693   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:04:10.415711   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:04:10.415721   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:04:10.415738   27502 main.go:141] libmachine: Running pre-create checks...
	I0729 01:04:10.415746   27502 main.go:141] libmachine: (ha-845088-m02) Calling .PreCreateCheck
	I0729 01:04:10.415902   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetConfigRaw
	I0729 01:04:10.416254   27502 main.go:141] libmachine: Creating machine...
	I0729 01:04:10.416267   27502 main.go:141] libmachine: (ha-845088-m02) Calling .Create
	I0729 01:04:10.416379   27502 main.go:141] libmachine: (ha-845088-m02) Creating KVM machine...
	I0729 01:04:10.417469   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found existing default KVM network
	I0729 01:04:10.417609   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found existing private KVM network mk-ha-845088
	I0729 01:04:10.417725   27502 main.go:141] libmachine: (ha-845088-m02) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02 ...
	I0729 01:04:10.417758   27502 main.go:141] libmachine: (ha-845088-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:04:10.417797   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.417712   27901 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:04:10.417879   27502 main.go:141] libmachine: (ha-845088-m02) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:04:10.644430   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.644272   27901 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa...
	I0729 01:04:10.979532   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.979397   27901 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/ha-845088-m02.rawdisk...
	I0729 01:04:10.979570   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Writing magic tar header
	I0729 01:04:10.979585   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Writing SSH key tar header
	I0729 01:04:10.979597   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.979541   27901 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02 ...
	I0729 01:04:10.979699   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02
	I0729 01:04:10.979729   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02 (perms=drwx------)
	I0729 01:04:10.979740   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:04:10.979755   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:04:10.979772   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:04:10.979782   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:04:10.979791   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:04:10.979802   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:04:10.979811   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:04:10.979818   27502 main.go:141] libmachine: (ha-845088-m02) Creating domain...
	I0729 01:04:10.979845   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:04:10.979868   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:04:10.979881   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:04:10.979896   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home
	I0729 01:04:10.979910   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Skipping /home - not owner
	I0729 01:04:10.980692   27502 main.go:141] libmachine: (ha-845088-m02) define libvirt domain using xml: 
	I0729 01:04:10.980713   27502 main.go:141] libmachine: (ha-845088-m02) <domain type='kvm'>
	I0729 01:04:10.980725   27502 main.go:141] libmachine: (ha-845088-m02)   <name>ha-845088-m02</name>
	I0729 01:04:10.980730   27502 main.go:141] libmachine: (ha-845088-m02)   <memory unit='MiB'>2200</memory>
	I0729 01:04:10.980736   27502 main.go:141] libmachine: (ha-845088-m02)   <vcpu>2</vcpu>
	I0729 01:04:10.980747   27502 main.go:141] libmachine: (ha-845088-m02)   <features>
	I0729 01:04:10.980753   27502 main.go:141] libmachine: (ha-845088-m02)     <acpi/>
	I0729 01:04:10.980762   27502 main.go:141] libmachine: (ha-845088-m02)     <apic/>
	I0729 01:04:10.980771   27502 main.go:141] libmachine: (ha-845088-m02)     <pae/>
	I0729 01:04:10.980781   27502 main.go:141] libmachine: (ha-845088-m02)     
	I0729 01:04:10.980790   27502 main.go:141] libmachine: (ha-845088-m02)   </features>
	I0729 01:04:10.980798   27502 main.go:141] libmachine: (ha-845088-m02)   <cpu mode='host-passthrough'>
	I0729 01:04:10.980804   27502 main.go:141] libmachine: (ha-845088-m02)   
	I0729 01:04:10.980812   27502 main.go:141] libmachine: (ha-845088-m02)   </cpu>
	I0729 01:04:10.980817   27502 main.go:141] libmachine: (ha-845088-m02)   <os>
	I0729 01:04:10.980824   27502 main.go:141] libmachine: (ha-845088-m02)     <type>hvm</type>
	I0729 01:04:10.980837   27502 main.go:141] libmachine: (ha-845088-m02)     <boot dev='cdrom'/>
	I0729 01:04:10.980850   27502 main.go:141] libmachine: (ha-845088-m02)     <boot dev='hd'/>
	I0729 01:04:10.980876   27502 main.go:141] libmachine: (ha-845088-m02)     <bootmenu enable='no'/>
	I0729 01:04:10.980891   27502 main.go:141] libmachine: (ha-845088-m02)   </os>
	I0729 01:04:10.980900   27502 main.go:141] libmachine: (ha-845088-m02)   <devices>
	I0729 01:04:10.980907   27502 main.go:141] libmachine: (ha-845088-m02)     <disk type='file' device='cdrom'>
	I0729 01:04:10.980942   27502 main.go:141] libmachine: (ha-845088-m02)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/boot2docker.iso'/>
	I0729 01:04:10.980970   27502 main.go:141] libmachine: (ha-845088-m02)       <target dev='hdc' bus='scsi'/>
	I0729 01:04:10.980981   27502 main.go:141] libmachine: (ha-845088-m02)       <readonly/>
	I0729 01:04:10.980991   27502 main.go:141] libmachine: (ha-845088-m02)     </disk>
	I0729 01:04:10.981004   27502 main.go:141] libmachine: (ha-845088-m02)     <disk type='file' device='disk'>
	I0729 01:04:10.981012   27502 main.go:141] libmachine: (ha-845088-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:04:10.981039   27502 main.go:141] libmachine: (ha-845088-m02)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/ha-845088-m02.rawdisk'/>
	I0729 01:04:10.981058   27502 main.go:141] libmachine: (ha-845088-m02)       <target dev='hda' bus='virtio'/>
	I0729 01:04:10.981066   27502 main.go:141] libmachine: (ha-845088-m02)     </disk>
	I0729 01:04:10.981074   27502 main.go:141] libmachine: (ha-845088-m02)     <interface type='network'>
	I0729 01:04:10.981081   27502 main.go:141] libmachine: (ha-845088-m02)       <source network='mk-ha-845088'/>
	I0729 01:04:10.981088   27502 main.go:141] libmachine: (ha-845088-m02)       <model type='virtio'/>
	I0729 01:04:10.981093   27502 main.go:141] libmachine: (ha-845088-m02)     </interface>
	I0729 01:04:10.981100   27502 main.go:141] libmachine: (ha-845088-m02)     <interface type='network'>
	I0729 01:04:10.981110   27502 main.go:141] libmachine: (ha-845088-m02)       <source network='default'/>
	I0729 01:04:10.981116   27502 main.go:141] libmachine: (ha-845088-m02)       <model type='virtio'/>
	I0729 01:04:10.981143   27502 main.go:141] libmachine: (ha-845088-m02)     </interface>
	I0729 01:04:10.981168   27502 main.go:141] libmachine: (ha-845088-m02)     <serial type='pty'>
	I0729 01:04:10.981181   27502 main.go:141] libmachine: (ha-845088-m02)       <target port='0'/>
	I0729 01:04:10.981192   27502 main.go:141] libmachine: (ha-845088-m02)     </serial>
	I0729 01:04:10.981203   27502 main.go:141] libmachine: (ha-845088-m02)     <console type='pty'>
	I0729 01:04:10.981210   27502 main.go:141] libmachine: (ha-845088-m02)       <target type='serial' port='0'/>
	I0729 01:04:10.981218   27502 main.go:141] libmachine: (ha-845088-m02)     </console>
	I0729 01:04:10.981229   27502 main.go:141] libmachine: (ha-845088-m02)     <rng model='virtio'>
	I0729 01:04:10.981243   27502 main.go:141] libmachine: (ha-845088-m02)       <backend model='random'>/dev/random</backend>
	I0729 01:04:10.981257   27502 main.go:141] libmachine: (ha-845088-m02)     </rng>
	I0729 01:04:10.981268   27502 main.go:141] libmachine: (ha-845088-m02)     
	I0729 01:04:10.981275   27502 main.go:141] libmachine: (ha-845088-m02)     
	I0729 01:04:10.981283   27502 main.go:141] libmachine: (ha-845088-m02)   </devices>
	I0729 01:04:10.981291   27502 main.go:141] libmachine: (ha-845088-m02) </domain>
	I0729 01:04:10.981300   27502 main.go:141] libmachine: (ha-845088-m02) 
	I0729 01:04:10.987788   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:32:4f:7d in network default
	I0729 01:04:10.988325   27502 main.go:141] libmachine: (ha-845088-m02) Ensuring networks are active...
	I0729 01:04:10.988347   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:10.989028   27502 main.go:141] libmachine: (ha-845088-m02) Ensuring network default is active
	I0729 01:04:10.989314   27502 main.go:141] libmachine: (ha-845088-m02) Ensuring network mk-ha-845088 is active
	I0729 01:04:10.989668   27502 main.go:141] libmachine: (ha-845088-m02) Getting domain xml...
	I0729 01:04:10.990320   27502 main.go:141] libmachine: (ha-845088-m02) Creating domain...
	I0729 01:04:12.182945   27502 main.go:141] libmachine: (ha-845088-m02) Waiting to get IP...
	I0729 01:04:12.184588   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:12.185226   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:12.185256   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:12.185168   27901 retry.go:31] will retry after 289.198233ms: waiting for machine to come up
	I0729 01:04:12.475541   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:12.476018   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:12.476042   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:12.475983   27901 retry.go:31] will retry after 317.394957ms: waiting for machine to come up
	I0729 01:04:12.795522   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:12.796068   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:12.796088   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:12.796026   27901 retry.go:31] will retry after 457.114248ms: waiting for machine to come up
	I0729 01:04:13.254701   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:13.255194   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:13.255224   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:13.255144   27901 retry.go:31] will retry after 595.132323ms: waiting for machine to come up
	I0729 01:04:13.851663   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:13.852282   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:13.852312   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:13.852240   27901 retry.go:31] will retry after 708.119901ms: waiting for machine to come up
	I0729 01:04:14.561481   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:14.561948   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:14.561978   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:14.561907   27901 retry.go:31] will retry after 788.634973ms: waiting for machine to come up
	I0729 01:04:15.352321   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:15.352863   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:15.352909   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:15.352829   27901 retry.go:31] will retry after 857.746874ms: waiting for machine to come up
	I0729 01:04:16.212356   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:16.212882   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:16.212908   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:16.212819   27901 retry.go:31] will retry after 1.465191331s: waiting for machine to come up
	I0729 01:04:17.679291   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:17.679628   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:17.679650   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:17.679594   27901 retry.go:31] will retry after 1.514834108s: waiting for machine to come up
	I0729 01:04:19.196241   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:19.196710   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:19.196739   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:19.196671   27901 retry.go:31] will retry after 1.789332149s: waiting for machine to come up
	I0729 01:04:20.987779   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:20.988128   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:20.988159   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:20.988100   27901 retry.go:31] will retry after 1.88591588s: waiting for machine to come up
	I0729 01:04:22.875421   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:22.875995   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:22.876037   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:22.875919   27901 retry.go:31] will retry after 2.781831956s: waiting for machine to come up
	I0729 01:04:25.659223   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:25.659731   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:25.659753   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:25.659692   27901 retry.go:31] will retry after 4.514403237s: waiting for machine to come up
	I0729 01:04:30.179257   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:30.179627   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:30.179669   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:30.179609   27901 retry.go:31] will retry after 3.951493535s: waiting for machine to come up
	I0729 01:04:34.135729   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.136242   27502 main.go:141] libmachine: (ha-845088-m02) Found IP for machine: 192.168.39.68
	I0729 01:04:34.136267   27502 main.go:141] libmachine: (ha-845088-m02) Reserving static IP address...
	I0729 01:04:34.136282   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has current primary IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.136599   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find host DHCP lease matching {name: "ha-845088-m02", mac: "52:54:00:d1:55:54", ip: "192.168.39.68"} in network mk-ha-845088
	I0729 01:04:34.206318   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Getting to WaitForSSH function...
	I0729 01:04:34.206347   27502 main.go:141] libmachine: (ha-845088-m02) Reserved static IP address: 192.168.39.68
	I0729 01:04:34.206360   27502 main.go:141] libmachine: (ha-845088-m02) Waiting for SSH to be available...
	I0729 01:04:34.209076   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.209598   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.209625   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.209808   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Using SSH client type: external
	I0729 01:04:34.209833   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa (-rw-------)
	I0729 01:04:34.209861   27502 main.go:141] libmachine: (ha-845088-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:04:34.209879   27502 main.go:141] libmachine: (ha-845088-m02) DBG | About to run SSH command:
	I0729 01:04:34.209894   27502 main.go:141] libmachine: (ha-845088-m02) DBG | exit 0
	I0729 01:04:34.335648   27502 main.go:141] libmachine: (ha-845088-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 01:04:34.335865   27502 main.go:141] libmachine: (ha-845088-m02) KVM machine creation complete!
	I0729 01:04:34.336175   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetConfigRaw
	I0729 01:04:34.336689   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:34.336879   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:34.337010   27502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:04:34.337026   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:04:34.338199   27502 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:04:34.338219   27502 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:04:34.338228   27502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:04:34.338237   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.340382   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.340729   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.340754   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.341043   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.341277   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.341439   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.341584   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.341769   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.342006   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.342025   27502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:04:34.446590   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:04:34.446617   27502 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:04:34.446628   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.449226   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.449570   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.449591   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.449705   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.449904   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.450093   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.450264   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.450445   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.450649   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.450662   27502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:04:34.556017   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:04:34.556118   27502 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:04:34.556130   27502 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:04:34.556141   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:34.556413   27502 buildroot.go:166] provisioning hostname "ha-845088-m02"
	I0729 01:04:34.556438   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:34.556610   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.559430   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.559805   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.559832   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.560062   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.560261   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.560412   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.560537   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.560678   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.560890   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.560907   27502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088-m02 && echo "ha-845088-m02" | sudo tee /etc/hostname
	I0729 01:04:34.677351   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088-m02
	
	I0729 01:04:34.677380   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.680212   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.680548   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.680575   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.680755   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.680928   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.681080   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.681209   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.681350   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.681506   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.681522   27502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:04:34.792246   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:04:34.792273   27502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:04:34.792292   27502 buildroot.go:174] setting up certificates
	I0729 01:04:34.792303   27502 provision.go:84] configureAuth start
	I0729 01:04:34.792315   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:34.792569   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:34.795284   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.795671   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.795699   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.795824   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.797710   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.797967   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.797993   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.798095   27502 provision.go:143] copyHostCerts
	I0729 01:04:34.798122   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:04:34.798158   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:04:34.798167   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:04:34.798234   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:04:34.798301   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:04:34.798318   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:04:34.798324   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:04:34.798348   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:04:34.798390   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:04:34.798407   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:04:34.798413   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:04:34.798434   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:04:34.798480   27502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088-m02 san=[127.0.0.1 192.168.39.68 ha-845088-m02 localhost minikube]
	I0729 01:04:35.036834   27502 provision.go:177] copyRemoteCerts
	I0729 01:04:35.036891   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:04:35.036911   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.039512   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.039790   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.039819   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.040005   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.040184   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.040319   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.040421   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.121478   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:04:35.121541   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:04:35.147403   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:04:35.147482   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:04:35.171398   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:04:35.171458   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 01:04:35.194919   27502 provision.go:87] duration metric: took 402.603951ms to configureAuth
	I0729 01:04:35.194943   27502 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:04:35.195111   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:35.195176   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.197932   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.198294   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.198322   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.198505   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.198686   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.198846   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.198950   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.199139   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:35.199314   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:35.199329   27502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:04:35.467956   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:04:35.467990   27502 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:04:35.468001   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetURL
	I0729 01:04:35.469282   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Using libvirt version 6000000
	I0729 01:04:35.471402   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.471736   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.471765   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.471923   27502 main.go:141] libmachine: Docker is up and running!
	I0729 01:04:35.471936   27502 main.go:141] libmachine: Reticulating splines...
	I0729 01:04:35.471943   27502 client.go:171] duration metric: took 25.056359047s to LocalClient.Create
	I0729 01:04:35.471961   27502 start.go:167] duration metric: took 25.056408542s to libmachine.API.Create "ha-845088"
	I0729 01:04:35.471974   27502 start.go:293] postStartSetup for "ha-845088-m02" (driver="kvm2")
	I0729 01:04:35.471987   27502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:04:35.472009   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.472220   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:04:35.472242   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.474192   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.474431   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.474459   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.474570   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.474750   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.474866   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.475045   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.557194   27502 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:04:35.561234   27502 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:04:35.561256   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:04:35.561323   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:04:35.561414   27502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:04:35.561424   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:04:35.561525   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:04:35.570745   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:04:35.593786   27502 start.go:296] duration metric: took 121.798873ms for postStartSetup
	I0729 01:04:35.593836   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetConfigRaw
	I0729 01:04:35.594401   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:35.597013   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.597369   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.597398   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.597589   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:04:35.597813   27502 start.go:128] duration metric: took 25.199969681s to createHost
	I0729 01:04:35.597845   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.600163   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.600510   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.600536   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.600698   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.600881   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.601041   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.601172   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.601350   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:35.601538   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:35.601548   27502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:04:35.707526   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215075.684293180
	
	I0729 01:04:35.707549   27502 fix.go:216] guest clock: 1722215075.684293180
	I0729 01:04:35.707556   27502 fix.go:229] Guest: 2024-07-29 01:04:35.68429318 +0000 UTC Remote: 2024-07-29 01:04:35.597827637 +0000 UTC m=+83.514117948 (delta=86.465543ms)
	I0729 01:04:35.707570   27502 fix.go:200] guest clock delta is within tolerance: 86.465543ms
	I0729 01:04:35.707575   27502 start.go:83] releasing machines lock for "ha-845088-m02", held for 25.30985305s
	I0729 01:04:35.707595   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.707845   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:35.710561   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.710961   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.710984   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.713187   27502 out.go:177] * Found network options:
	I0729 01:04:35.714471   27502 out.go:177]   - NO_PROXY=192.168.39.69
	W0729 01:04:35.715649   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:04:35.715675   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.716140   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.716317   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.716367   27502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:04:35.716410   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	W0729 01:04:35.716628   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:04:35.716681   27502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:04:35.716695   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.719117   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.719360   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.719521   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.719544   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.719698   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.719845   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.719863   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.719878   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.720027   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.720040   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.720196   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.720216   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.720330   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.720463   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
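Each `sshutil` line above corresponds to opening a key-authenticated SSH session to the node and running a command over it. A rough equivalent with golang.org/x/crypto/ssh, using the host, user and key path from the log (host-key verification is skipped here purely for brevity, which is only acceptable for a throwaway test VM):

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.68:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("date +%s.%N")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("remote clock: %s", out)
    }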
	I0729 01:04:35.955395   27502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:04:35.961747   27502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:04:35.961805   27502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:04:35.978705   27502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
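The `find ... -exec mv` above renames every bridge/podman CNI config so the runtime's intended CNI is the only one left active. A sketch of the same rename pass in Go (directory and `.mk_disabled` suffix taken from the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }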
	I0729 01:04:35.978725   27502 start.go:495] detecting cgroup driver to use...
	I0729 01:04:35.978788   27502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:04:35.995273   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:04:36.010704   27502 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:04:36.010758   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:04:36.026154   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:04:36.040175   27502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:04:36.165262   27502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:04:36.299726   27502 docker.go:233] disabling docker service ...
	I0729 01:04:36.299803   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:04:36.314101   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:04:36.327248   27502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:04:36.456152   27502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:04:36.577668   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:04:36.591512   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:04:36.610337   27502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:04:36.610404   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.620949   27502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:04:36.621005   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.632188   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.642694   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.653500   27502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:04:36.664444   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.674944   27502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.694016   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
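The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl default. A minimal line-rewriting sketch in Go covering the first two edits (same file path and keys as in the log; the remaining edits follow the same pattern):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }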
	I0729 01:04:36.704389   27502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:04:36.713960   27502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:04:36.714007   27502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:04:36.727754   27502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
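The failed sysctl probe above just means br_netfilter was not loaded yet; loading the module and enabling IP forwarding makes bridged pod traffic visible to iptables. A small sketch of the same two steps (paths from the log; the modprobe still shells out because module loading has no portable Go API, and both steps require root):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
    		// Equivalent of: sudo modprobe br_netfilter
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("bridge netfilter loaded and ip_forward enabled")
    }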
	I0729 01:04:36.737464   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:04:36.859547   27502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:04:36.996403   27502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:04:36.996499   27502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:04:37.001249   27502 start.go:563] Will wait 60s for crictl version
	I0729 01:04:37.001303   27502 ssh_runner.go:195] Run: which crictl
	I0729 01:04:37.005610   27502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:04:37.045547   27502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:04:37.045627   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:04:37.074592   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:04:37.102962   27502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:04:37.104421   27502 out.go:177]   - env NO_PROXY=192.168.39.69
	I0729 01:04:37.105582   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:37.107871   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:37.108222   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:37.108248   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:37.108402   27502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:04:37.112665   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:04:37.125685   27502 mustload.go:65] Loading cluster: ha-845088
	I0729 01:04:37.125881   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:37.126169   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:37.126203   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:37.140417   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43583
	I0729 01:04:37.140801   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:37.141198   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:37.141218   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:37.141588   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:37.141762   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:37.143494   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:37.143750   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:37.143771   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:37.158014   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41759
	I0729 01:04:37.158521   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:37.158966   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:37.158985   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:37.159254   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:37.159435   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:37.159573   27502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.68
	I0729 01:04:37.159585   27502 certs.go:194] generating shared ca certs ...
	I0729 01:04:37.159602   27502 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:37.159714   27502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:04:37.159751   27502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:04:37.159759   27502 certs.go:256] generating profile certs ...
	I0729 01:04:37.159831   27502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:04:37.159855   27502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713
	I0729 01:04:37.159869   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.68 192.168.39.254]
	I0729 01:04:37.366318   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713 ...
	I0729 01:04:37.366347   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713: {Name:mkb24bcdc8ee02409df18eff5a4bc131d770117c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:37.366509   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713 ...
	I0729 01:04:37.366523   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713: {Name:mkd96f5a1ff15a4d77eca684ce230f7e1fbf5165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:37.366588   27502 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:04:37.366714   27502 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
	I0729 01:04:37.366841   27502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
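The apiserver certificate generated above is signed by the shared minikube CA and carries every address the control plane answers on (service IP, localhost, both node IPs and the kube-vip VIP) as SANs. A self-contained sketch of issuing such a cert with crypto/x509, using the SAN list from the log (the CA is generated inline here for illustration; minikube of course reuses its existing ca.crt/ca.key):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (a real run would load the existing CA key pair instead).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// API server leaf cert with the SANs seen in the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.69"), net.ParseIP("192.168.39.68"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }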
	I0729 01:04:37.366855   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:04:37.366868   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:04:37.366880   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:04:37.366892   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:04:37.366902   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:04:37.366912   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:04:37.366924   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:04:37.366933   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:04:37.366979   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:04:37.367011   27502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:04:37.367020   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:04:37.367041   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:04:37.367095   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:04:37.367121   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:04:37.367160   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:04:37.367186   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.367200   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.367211   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:04:37.367240   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:37.369968   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:37.370348   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:37.370376   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:37.370525   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:37.370724   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:37.370875   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:37.371022   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:37.443519   27502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 01:04:37.449169   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 01:04:37.460468   27502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 01:04:37.465134   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 01:04:37.475154   27502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 01:04:37.480107   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 01:04:37.489975   27502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 01:04:37.494197   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 01:04:37.503857   27502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 01:04:37.507993   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 01:04:37.517672   27502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 01:04:37.521671   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 01:04:37.531381   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:04:37.556947   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:04:37.582241   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:04:37.606378   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:04:37.630339   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 01:04:37.654670   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:04:37.679162   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:04:37.702464   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:04:37.725172   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:04:37.747400   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:04:37.770381   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:04:37.794773   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 01:04:37.811117   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 01:04:37.828998   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 01:04:37.845558   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 01:04:37.862898   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 01:04:37.880723   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 01:04:37.898535   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 01:04:37.916403   27502 ssh_runner.go:195] Run: openssl version
	I0729 01:04:37.922573   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:04:37.933193   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.937486   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.937526   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.943480   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:04:37.953945   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:04:37.964326   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.969058   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.969114   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.974785   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:04:37.985091   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:04:37.995496   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:04:37.999794   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:04:37.999843   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:04:38.005263   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
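The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its subject-hash name, which is how OpenSSL locates trust anchors. A sketch of the same step in Go that shells out to openssl for the hash (paths as in the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(pemPath string) error {
    	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of: ln -fs <pemPath> /etc/ssl/certs/<hash>.0
    	_ = os.Remove(link)
    	if err := os.Symlink(pemPath, link); err != nil {
    		return err
    	}
    	fmt.Println("linked", link, "->", pemPath)
    	return nil
    }

    func main() {
    	for _, p := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/16623.pem",
    		"/usr/share/ca-certificates/166232.pem",
    	} {
    		if err := installCA(p); err != nil {
    			log.Fatal(err)
    		}
    	}
    }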
	I0729 01:04:38.015719   27502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:04:38.019698   27502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:04:38.019753   27502 kubeadm.go:934] updating node {m02 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0729 01:04:38.019860   27502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:04:38.019888   27502 kube-vip.go:115] generating kube-vip config ...
	I0729 01:04:38.019916   27502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:04:38.036187   27502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:04:38.036242   27502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 01:04:38.036296   27502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:04:38.045421   27502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 01:04:38.045477   27502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 01:04:38.054460   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 01:04:38.054487   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:04:38.054529   27502 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 01:04:38.054560   27502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:04:38.054563   27502 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 01:04:38.058519   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 01:04:38.058543   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 01:04:44.238102   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:04:44.238176   27502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:04:44.243414   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 01:04:44.243449   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 01:04:52.957119   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:04:52.972545   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:04:52.972661   27502 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:04:52.977067   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 01:04:52.977105   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
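Because the freshly created node has no Kubernetes binaries yet, each one is downloaded from dl.k8s.io, checked against its published .sha256 file, and copied into /var/lib/minikube/binaries. A sketch of that download-and-verify step for one binary (URL taken from the log; error handling trimmed to the essentials):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
    	bin, err := fetch(base)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	got := sha256.Sum256(bin)
    	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
    		log.Fatal("checksum mismatch for kubectl")
    	}
    	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("kubectl downloaded and verified")
    }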
	I0729 01:04:53.394975   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 01:04:53.404083   27502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 01:04:53.421595   27502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:04:53.438149   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 01:04:53.455744   27502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:04:53.459918   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:04:53.473295   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:04:53.615633   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:04:53.634677   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:53.635164   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:53.635255   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:53.649859   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0729 01:04:53.650272   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:53.650691   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:53.650712   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:53.651022   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:53.651200   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:53.651345   27502 start.go:317] joinCluster: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:04:53.651456   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 01:04:53.651477   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:53.654468   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:53.654872   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:53.654910   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:53.655011   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:53.655191   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:53.655358   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:53.655488   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:53.818865   27502 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:04:53.818917   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9fpyd.eiwyuo54sxlezpb0 --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0729 01:05:15.717189   27502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9fpyd.eiwyuo54sxlezpb0 --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (21.898248177s)
	I0729 01:05:15.717229   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 01:05:16.198226   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-845088-m02 minikube.k8s.io/updated_at=2024_07_29T01_05_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=ha-845088 minikube.k8s.io/primary=false
	I0729 01:05:16.348443   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-845088-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 01:05:16.496787   27502 start.go:319] duration metric: took 22.845446497s to joinCluster
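Joining m02 as a second control plane boils down to minting a join token on the existing node and running `kubeadm join` with the control-plane flags shown above. A sketch that assembles and runs the same command (the token and CA hash are placeholders here; in practice they come from `kubeadm token create --print-join-command` as in the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Placeholders: real values come from `kubeadm token create --print-join-command`.
    	token := "<token>"
    	caHash := "sha256:<discovery-token-ca-cert-hash>"

    	args := []string{
    		"join", "control-plane.minikube.internal:8443",
    		"--token", token,
    		"--discovery-token-ca-cert-hash", caHash,
    		"--ignore-preflight-errors=all",
    		"--cri-socket", "unix:///var/run/crio/crio.sock",
    		"--node-name=ha-845088-m02",
    		"--control-plane",
    		"--apiserver-advertise-address=192.168.39.68",
    		"--apiserver-bind-port=8443",
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s\n", out)
    	if err != nil {
    		log.Fatal(err)
    	}
    }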
	I0729 01:05:16.496888   27502 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:05:16.497205   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:05:16.498446   27502 out.go:177] * Verifying Kubernetes components...
	I0729 01:05:16.499848   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:05:16.736731   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:05:16.792115   27502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:05:16.792330   27502 kapi.go:59] client config for ha-845088: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 01:05:16.792382   27502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I0729 01:05:16.792556   27502 node_ready.go:35] waiting up to 6m0s for node "ha-845088-m02" to be "Ready" ...
	I0729 01:05:16.792631   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:16.792638   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:16.792646   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:16.792653   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:16.805176   27502 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0729 01:05:17.293652   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:17.293677   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:17.293686   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:17.293691   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:17.298666   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:17.792894   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:17.792915   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:17.792923   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:17.792927   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:17.797632   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:18.292859   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:18.292882   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:18.292903   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:18.292916   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:18.298491   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:18.792761   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:18.792793   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:18.792803   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:18.792809   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:18.796706   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:18.797543   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:19.293806   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:19.293833   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:19.293841   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:19.293847   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:19.296836   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:19.793097   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:19.793132   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:19.793140   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:19.793145   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:19.796615   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:20.292813   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:20.292835   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:20.292847   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:20.292854   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:20.296002   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:20.792927   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:20.792954   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:20.792966   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:20.792971   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:20.796438   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:21.293457   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:21.293478   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:21.293485   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:21.293488   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:21.297416   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:21.298137   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:21.793678   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:21.793701   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:21.793713   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:21.793722   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:21.797251   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:22.292758   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:22.292778   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:22.292788   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:22.292794   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:22.296502   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:22.793213   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:22.793234   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:22.793242   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:22.793246   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:22.796581   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:23.293534   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:23.293557   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:23.293565   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:23.293569   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:23.333422   27502 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0729 01:05:23.334323   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:23.792777   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:23.792803   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:23.792816   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:23.792822   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:23.796119   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:24.293257   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:24.293283   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:24.293293   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:24.293299   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:24.297444   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:24.793639   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:24.793660   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:24.793668   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:24.793672   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:24.797083   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:25.293390   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:25.293417   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:25.293429   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:25.293435   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:25.296744   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:25.793336   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:25.793360   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:25.793369   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:25.793376   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:25.796757   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:25.797450   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:26.292834   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:26.292856   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:26.292864   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:26.292867   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:26.297284   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:26.792729   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:26.792751   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:26.792759   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:26.792763   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:26.796484   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:27.293422   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:27.293441   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:27.293449   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:27.293453   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:27.296541   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:27.793726   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:27.793750   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:27.793760   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:27.793766   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:27.796949   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:27.797522   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:28.292921   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:28.292940   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:28.292949   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:28.292954   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:28.297106   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:28.792682   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:28.792716   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:28.792724   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:28.792728   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:28.796177   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:29.293024   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:29.293045   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:29.293053   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:29.293058   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:29.296525   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:29.792713   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:29.792733   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:29.792742   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:29.792748   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:29.796602   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:30.293344   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:30.293366   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:30.293374   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:30.293379   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:30.296981   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:30.297733   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:30.793625   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:30.793648   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:30.793656   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:30.793660   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:30.796833   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:31.292846   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:31.292876   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:31.292887   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:31.292892   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:31.296198   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:31.793426   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:31.793449   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:31.793456   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:31.793459   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:31.796583   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:32.293141   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:32.293168   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:32.293178   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:32.293184   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:32.296333   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:32.793529   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:32.793554   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:32.793562   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:32.793566   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:32.797264   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:32.797782   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:33.293165   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:33.293184   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:33.293193   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:33.293196   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:33.298367   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:33.793732   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:33.793753   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:33.793761   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:33.793766   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:33.797430   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:34.293421   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:34.293442   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:34.293450   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:34.293455   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:34.296911   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:34.793537   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:34.793559   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:34.793567   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:34.793570   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:34.796836   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.293492   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:35.293519   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.293529   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.293532   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.297077   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.298065   27502 node_ready.go:49] node "ha-845088-m02" has status "Ready":"True"
	I0729 01:05:35.298094   27502 node_ready.go:38] duration metric: took 18.50551754s for node "ha-845088-m02" to be "Ready" ...
	I0729 01:05:35.298105   27502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:05:35.298175   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:35.298189   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.298199   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.298206   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.302895   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:35.308915   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.308984   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-26phs
	I0729 01:05:35.308989   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.308999   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.309006   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.312730   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.313304   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.313321   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.313328   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.313333   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.315704   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.316180   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.316205   27502 pod_ready.go:81] duration metric: took 7.266995ms for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.316218   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.316269   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x4jjj
	I0729 01:05:35.316277   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.316283   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.316288   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.318653   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.319250   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.319262   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.319268   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.319273   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.321625   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.322017   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.322031   27502 pod_ready.go:81] duration metric: took 5.802907ms for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.322041   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.322090   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088
	I0729 01:05:35.322099   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.322109   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.322112   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.324190   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.324643   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.324657   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.324666   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.324673   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.326735   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.327246   27502 pod_ready.go:92] pod "etcd-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.327260   27502 pod_ready.go:81] duration metric: took 5.212634ms for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.327267   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.327310   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m02
	I0729 01:05:35.327320   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.327328   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.327333   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.329466   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.329979   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:35.329991   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.329997   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.330002   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.331992   27502 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 01:05:35.332520   27502 pod_ready.go:92] pod "etcd-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.332535   27502 pod_ready.go:81] duration metric: took 5.262005ms for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.332550   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.494011   27502 request.go:629] Waited for 161.401722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:05:35.494091   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:05:35.494098   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.494109   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.494118   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.497476   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.694573   27502 request.go:629] Waited for 196.374942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.694635   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.694647   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.694657   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.694664   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.697739   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.698301   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.698316   27502 pod_ready.go:81] duration metric: took 365.759555ms for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.698324   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.894470   27502 request.go:629] Waited for 196.093447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:05:35.894558   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:05:35.894566   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.894575   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.894580   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.898260   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.094217   27502 request.go:629] Waited for 195.390243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.094272   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.094276   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.094284   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.094288   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.098180   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.098826   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:36.098843   27502 pod_ready.go:81] duration metric: took 400.512447ms for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.098853   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.293982   27502 request.go:629] Waited for 195.040587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:05:36.294048   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:05:36.294053   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.294060   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.294064   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.297270   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.494332   27502 request.go:629] Waited for 196.384953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:36.494403   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:36.494412   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.494420   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.494427   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.498075   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.498686   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:36.498705   27502 pod_ready.go:81] duration metric: took 399.843879ms for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.498714   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.694113   27502 request.go:629] Waited for 195.327672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:05:36.694179   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:05:36.694186   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.694196   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.694203   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.698055   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.894263   27502 request.go:629] Waited for 195.357228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.894322   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.894328   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.894339   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.894346   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.897615   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.898256   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:36.898274   27502 pod_ready.go:81] duration metric: took 399.5534ms for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.898284   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.094411   27502 request.go:629] Waited for 196.068776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:05:37.094486   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:05:37.094504   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.094516   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.094522   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.098653   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:37.293857   27502 request.go:629] Waited for 194.600226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:37.294013   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:37.294039   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.294052   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.294059   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.297322   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:37.297711   27502 pod_ready.go:92] pod "kube-proxy-j6gxl" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:37.297728   27502 pod_ready.go:81] duration metric: took 399.435925ms for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.297738   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.493965   27502 request.go:629] Waited for 196.14423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:05:37.494056   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:05:37.494063   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.494073   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.494081   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.500693   27502 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 01:05:37.694584   27502 request.go:629] Waited for 192.391917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:37.694678   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:37.694689   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.694705   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.694717   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.698804   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:37.699305   27502 pod_ready.go:92] pod "kube-proxy-tmzt7" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:37.699325   27502 pod_ready.go:81] duration metric: took 401.579876ms for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.699334   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.894464   27502 request.go:629] Waited for 195.060285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:05:37.894528   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:05:37.894535   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.894548   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.894553   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.897748   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.093740   27502 request.go:629] Waited for 195.304241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:38.093810   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:38.093821   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.093833   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.093839   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.097181   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.097971   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:38.097989   27502 pod_ready.go:81] duration metric: took 398.647856ms for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:38.097999   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:38.294178   27502 request.go:629] Waited for 196.110447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:05:38.294252   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:05:38.294259   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.294269   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.294278   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.297823   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.493896   27502 request.go:629] Waited for 195.394372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:38.493952   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:38.493959   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.493966   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.493973   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.496904   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:38.497673   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:38.497689   27502 pod_ready.go:81] duration metric: took 399.683512ms for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:38.497699   27502 pod_ready.go:38] duration metric: took 3.199579282s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:05:38.497716   27502 api_server.go:52] waiting for apiserver process to appear ...
	I0729 01:05:38.497765   27502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:05:38.514296   27502 api_server.go:72] duration metric: took 22.017368118s to wait for apiserver process to appear ...
	I0729 01:05:38.514316   27502 api_server.go:88] waiting for apiserver healthz status ...
	I0729 01:05:38.514331   27502 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0729 01:05:38.518520   27502 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0729 01:05:38.518582   27502 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I0729 01:05:38.518591   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.518601   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.518611   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.519521   27502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 01:05:38.519618   27502 api_server.go:141] control plane version: v1.30.3
	I0729 01:05:38.519634   27502 api_server.go:131] duration metric: took 5.313497ms to wait for apiserver health ...
	I0729 01:05:38.519642   27502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 01:05:38.694128   27502 request.go:629] Waited for 174.41718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:38.694189   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:38.694196   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.694204   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.694211   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.699228   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:38.704674   27502 system_pods.go:59] 17 kube-system pods found
	I0729 01:05:38.704697   27502 system_pods.go:61] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:05:38.704703   27502 system_pods.go:61] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:05:38.704706   27502 system_pods.go:61] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:05:38.704710   27502 system_pods.go:61] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:05:38.704714   27502 system_pods.go:61] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:05:38.704717   27502 system_pods.go:61] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:05:38.704723   27502 system_pods.go:61] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:05:38.704726   27502 system_pods.go:61] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:05:38.704730   27502 system_pods.go:61] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:05:38.704733   27502 system_pods.go:61] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:05:38.704736   27502 system_pods.go:61] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:05:38.704740   27502 system_pods.go:61] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:05:38.704744   27502 system_pods.go:61] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:05:38.704747   27502 system_pods.go:61] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:05:38.704749   27502 system_pods.go:61] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:05:38.704752   27502 system_pods.go:61] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:05:38.704755   27502 system_pods.go:61] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:05:38.704761   27502 system_pods.go:74] duration metric: took 185.111935ms to wait for pod list to return data ...
	I0729 01:05:38.704769   27502 default_sa.go:34] waiting for default service account to be created ...
	I0729 01:05:38.894055   27502 request.go:629] Waited for 189.221463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:05:38.894118   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:05:38.894125   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.894134   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.894143   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.897226   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.897414   27502 default_sa.go:45] found service account: "default"
	I0729 01:05:38.897428   27502 default_sa.go:55] duration metric: took 192.65029ms for default service account to be created ...
	I0729 01:05:38.897435   27502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 01:05:39.093698   27502 request.go:629] Waited for 196.210309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:39.093764   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:39.093771   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:39.093780   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:39.093789   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:39.099136   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:39.103860   27502 system_pods.go:86] 17 kube-system pods found
	I0729 01:05:39.103883   27502 system_pods.go:89] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:05:39.103887   27502 system_pods.go:89] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:05:39.103891   27502 system_pods.go:89] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:05:39.103895   27502 system_pods.go:89] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:05:39.103899   27502 system_pods.go:89] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:05:39.103903   27502 system_pods.go:89] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:05:39.103906   27502 system_pods.go:89] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:05:39.103911   27502 system_pods.go:89] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:05:39.103915   27502 system_pods.go:89] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:05:39.103919   27502 system_pods.go:89] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:05:39.103923   27502 system_pods.go:89] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:05:39.103929   27502 system_pods.go:89] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:05:39.103933   27502 system_pods.go:89] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:05:39.103936   27502 system_pods.go:89] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:05:39.103939   27502 system_pods.go:89] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:05:39.103943   27502 system_pods.go:89] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:05:39.103947   27502 system_pods.go:89] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:05:39.103954   27502 system_pods.go:126] duration metric: took 206.514725ms to wait for k8s-apps to be running ...
	I0729 01:05:39.103961   27502 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 01:05:39.104003   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:05:39.119401   27502 system_svc.go:56] duration metric: took 15.427514ms WaitForService to wait for kubelet
	I0729 01:05:39.119424   27502 kubeadm.go:582] duration metric: took 22.62250259s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:05:39.119441   27502 node_conditions.go:102] verifying NodePressure condition ...
	I0729 01:05:39.293824   27502 request.go:629] Waited for 174.318053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I0729 01:05:39.293905   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I0729 01:05:39.293913   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:39.293924   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:39.293935   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:39.297453   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:39.298167   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:05:39.298186   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:05:39.298195   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:05:39.298199   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:05:39.298203   27502 node_conditions.go:105] duration metric: took 178.757884ms to run NodePressure ...
	I0729 01:05:39.298213   27502 start.go:241] waiting for startup goroutines ...
	I0729 01:05:39.298234   27502 start.go:255] writing updated cluster config ...
	I0729 01:05:39.300330   27502 out.go:177] 
	I0729 01:05:39.301837   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:05:39.301939   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:05:39.303782   27502 out.go:177] * Starting "ha-845088-m03" control-plane node in "ha-845088" cluster
	I0729 01:05:39.305172   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:05:39.305192   27502 cache.go:56] Caching tarball of preloaded images
	I0729 01:05:39.305285   27502 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:05:39.305295   27502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:05:39.305374   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:05:39.305518   27502 start.go:360] acquireMachinesLock for ha-845088-m03: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:05:39.305556   27502 start.go:364] duration metric: took 20.255µs to acquireMachinesLock for "ha-845088-m03"
	I0729 01:05:39.305574   27502 start.go:93] Provisioning new machine with config: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:05:39.305660   27502 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 01:05:39.307190   27502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:05:39.307257   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:05:39.307287   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:05:39.324326   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0729 01:05:39.324740   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:05:39.325176   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:05:39.325195   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:05:39.325498   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:05:39.325670   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:05:39.325810   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:05:39.325964   27502 start.go:159] libmachine.API.Create for "ha-845088" (driver="kvm2")
	I0729 01:05:39.325991   27502 client.go:168] LocalClient.Create starting
	I0729 01:05:39.326025   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:05:39.326065   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:05:39.326083   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:05:39.326149   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:05:39.326176   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:05:39.326192   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:05:39.326218   27502 main.go:141] libmachine: Running pre-create checks...
	I0729 01:05:39.326230   27502 main.go:141] libmachine: (ha-845088-m03) Calling .PreCreateCheck
	I0729 01:05:39.326386   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetConfigRaw
	I0729 01:05:39.326737   27502 main.go:141] libmachine: Creating machine...
	I0729 01:05:39.326750   27502 main.go:141] libmachine: (ha-845088-m03) Calling .Create
	I0729 01:05:39.326876   27502 main.go:141] libmachine: (ha-845088-m03) Creating KVM machine...
	I0729 01:05:39.328256   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found existing default KVM network
	I0729 01:05:39.328414   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found existing private KVM network mk-ha-845088
	I0729 01:05:39.328543   27502 main.go:141] libmachine: (ha-845088-m03) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03 ...
	I0729 01:05:39.328569   27502 main.go:141] libmachine: (ha-845088-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:05:39.328641   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.328542   28354 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:05:39.328790   27502 main.go:141] libmachine: (ha-845088-m03) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:05:39.581441   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.581321   28354 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa...
	I0729 01:05:39.873658   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.873558   28354 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/ha-845088-m03.rawdisk...
	I0729 01:05:39.873687   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Writing magic tar header
	I0729 01:05:39.873702   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Writing SSH key tar header
	I0729 01:05:39.873712   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.873660   28354 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03 ...
	I0729 01:05:39.873826   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03 (perms=drwx------)
	I0729 01:05:39.873857   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03
	I0729 01:05:39.873865   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:05:39.873873   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:05:39.873880   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:05:39.873889   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:05:39.873897   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:05:39.873905   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:05:39.873912   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:05:39.873918   27502 main.go:141] libmachine: (ha-845088-m03) Creating domain...
	I0729 01:05:39.873924   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:05:39.873944   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:05:39.873965   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:05:39.873978   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home
	I0729 01:05:39.873986   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Skipping /home - not owner
	I0729 01:05:39.874850   27502 main.go:141] libmachine: (ha-845088-m03) define libvirt domain using xml: 
	I0729 01:05:39.874872   27502 main.go:141] libmachine: (ha-845088-m03) <domain type='kvm'>
	I0729 01:05:39.874882   27502 main.go:141] libmachine: (ha-845088-m03)   <name>ha-845088-m03</name>
	I0729 01:05:39.874893   27502 main.go:141] libmachine: (ha-845088-m03)   <memory unit='MiB'>2200</memory>
	I0729 01:05:39.874905   27502 main.go:141] libmachine: (ha-845088-m03)   <vcpu>2</vcpu>
	I0729 01:05:39.874916   27502 main.go:141] libmachine: (ha-845088-m03)   <features>
	I0729 01:05:39.874925   27502 main.go:141] libmachine: (ha-845088-m03)     <acpi/>
	I0729 01:05:39.874934   27502 main.go:141] libmachine: (ha-845088-m03)     <apic/>
	I0729 01:05:39.874943   27502 main.go:141] libmachine: (ha-845088-m03)     <pae/>
	I0729 01:05:39.874949   27502 main.go:141] libmachine: (ha-845088-m03)     
	I0729 01:05:39.874954   27502 main.go:141] libmachine: (ha-845088-m03)   </features>
	I0729 01:05:39.874959   27502 main.go:141] libmachine: (ha-845088-m03)   <cpu mode='host-passthrough'>
	I0729 01:05:39.874964   27502 main.go:141] libmachine: (ha-845088-m03)   
	I0729 01:05:39.874974   27502 main.go:141] libmachine: (ha-845088-m03)   </cpu>
	I0729 01:05:39.874980   27502 main.go:141] libmachine: (ha-845088-m03)   <os>
	I0729 01:05:39.874989   27502 main.go:141] libmachine: (ha-845088-m03)     <type>hvm</type>
	I0729 01:05:39.875018   27502 main.go:141] libmachine: (ha-845088-m03)     <boot dev='cdrom'/>
	I0729 01:05:39.875037   27502 main.go:141] libmachine: (ha-845088-m03)     <boot dev='hd'/>
	I0729 01:05:39.875051   27502 main.go:141] libmachine: (ha-845088-m03)     <bootmenu enable='no'/>
	I0729 01:05:39.875070   27502 main.go:141] libmachine: (ha-845088-m03)   </os>
	I0729 01:05:39.875081   27502 main.go:141] libmachine: (ha-845088-m03)   <devices>
	I0729 01:05:39.875096   27502 main.go:141] libmachine: (ha-845088-m03)     <disk type='file' device='cdrom'>
	I0729 01:05:39.875115   27502 main.go:141] libmachine: (ha-845088-m03)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/boot2docker.iso'/>
	I0729 01:05:39.875128   27502 main.go:141] libmachine: (ha-845088-m03)       <target dev='hdc' bus='scsi'/>
	I0729 01:05:39.875138   27502 main.go:141] libmachine: (ha-845088-m03)       <readonly/>
	I0729 01:05:39.875148   27502 main.go:141] libmachine: (ha-845088-m03)     </disk>
	I0729 01:05:39.875159   27502 main.go:141] libmachine: (ha-845088-m03)     <disk type='file' device='disk'>
	I0729 01:05:39.875172   27502 main.go:141] libmachine: (ha-845088-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:05:39.875196   27502 main.go:141] libmachine: (ha-845088-m03)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/ha-845088-m03.rawdisk'/>
	I0729 01:05:39.875213   27502 main.go:141] libmachine: (ha-845088-m03)       <target dev='hda' bus='virtio'/>
	I0729 01:05:39.875224   27502 main.go:141] libmachine: (ha-845088-m03)     </disk>
	I0729 01:05:39.875234   27502 main.go:141] libmachine: (ha-845088-m03)     <interface type='network'>
	I0729 01:05:39.875246   27502 main.go:141] libmachine: (ha-845088-m03)       <source network='mk-ha-845088'/>
	I0729 01:05:39.875255   27502 main.go:141] libmachine: (ha-845088-m03)       <model type='virtio'/>
	I0729 01:05:39.875262   27502 main.go:141] libmachine: (ha-845088-m03)     </interface>
	I0729 01:05:39.875269   27502 main.go:141] libmachine: (ha-845088-m03)     <interface type='network'>
	I0729 01:05:39.875275   27502 main.go:141] libmachine: (ha-845088-m03)       <source network='default'/>
	I0729 01:05:39.875282   27502 main.go:141] libmachine: (ha-845088-m03)       <model type='virtio'/>
	I0729 01:05:39.875297   27502 main.go:141] libmachine: (ha-845088-m03)     </interface>
	I0729 01:05:39.875313   27502 main.go:141] libmachine: (ha-845088-m03)     <serial type='pty'>
	I0729 01:05:39.875326   27502 main.go:141] libmachine: (ha-845088-m03)       <target port='0'/>
	I0729 01:05:39.875337   27502 main.go:141] libmachine: (ha-845088-m03)     </serial>
	I0729 01:05:39.875350   27502 main.go:141] libmachine: (ha-845088-m03)     <console type='pty'>
	I0729 01:05:39.875361   27502 main.go:141] libmachine: (ha-845088-m03)       <target type='serial' port='0'/>
	I0729 01:05:39.875373   27502 main.go:141] libmachine: (ha-845088-m03)     </console>
	I0729 01:05:39.875387   27502 main.go:141] libmachine: (ha-845088-m03)     <rng model='virtio'>
	I0729 01:05:39.875402   27502 main.go:141] libmachine: (ha-845088-m03)       <backend model='random'>/dev/random</backend>
	I0729 01:05:39.875410   27502 main.go:141] libmachine: (ha-845088-m03)     </rng>
	I0729 01:05:39.875439   27502 main.go:141] libmachine: (ha-845088-m03)     
	I0729 01:05:39.875448   27502 main.go:141] libmachine: (ha-845088-m03)     
	I0729 01:05:39.875491   27502 main.go:141] libmachine: (ha-845088-m03)   </devices>
	I0729 01:05:39.875514   27502 main.go:141] libmachine: (ha-845088-m03) </domain>
	I0729 01:05:39.875526   27502 main.go:141] libmachine: (ha-845088-m03) 
	I0729 01:05:39.882005   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:53:46:2d in network default
	I0729 01:05:39.882531   27502 main.go:141] libmachine: (ha-845088-m03) Ensuring networks are active...
	I0729 01:05:39.882565   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:39.883346   27502 main.go:141] libmachine: (ha-845088-m03) Ensuring network default is active
	I0729 01:05:39.883713   27502 main.go:141] libmachine: (ha-845088-m03) Ensuring network mk-ha-845088 is active
	I0729 01:05:39.884078   27502 main.go:141] libmachine: (ha-845088-m03) Getting domain xml...
	I0729 01:05:39.884758   27502 main.go:141] libmachine: (ha-845088-m03) Creating domain...
	I0729 01:05:41.107959   27502 main.go:141] libmachine: (ha-845088-m03) Waiting to get IP...
	I0729 01:05:41.108667   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:41.109143   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:41.109163   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:41.109114   28354 retry.go:31] will retry after 214.34753ms: waiting for machine to come up
	I0729 01:05:41.325647   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:41.326155   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:41.326184   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:41.326106   28354 retry.go:31] will retry after 375.969123ms: waiting for machine to come up
	I0729 01:05:41.703622   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:41.704053   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:41.704078   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:41.704023   28354 retry.go:31] will retry after 475.943307ms: waiting for machine to come up
	I0729 01:05:42.181142   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:42.181586   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:42.181632   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:42.181563   28354 retry.go:31] will retry after 559.597658ms: waiting for machine to come up
	I0729 01:05:42.742209   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:42.742637   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:42.742667   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:42.742571   28354 retry.go:31] will retry after 635.877296ms: waiting for machine to come up
	I0729 01:05:43.380286   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:43.380759   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:43.380786   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:43.380705   28354 retry.go:31] will retry after 895.342626ms: waiting for machine to come up
	I0729 01:05:44.277705   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:44.278180   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:44.278210   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:44.278127   28354 retry.go:31] will retry after 868.037692ms: waiting for machine to come up
	I0729 01:05:45.148047   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:45.148487   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:45.148517   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:45.148461   28354 retry.go:31] will retry after 998.649569ms: waiting for machine to come up
	I0729 01:05:46.149225   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:46.149646   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:46.149673   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:46.149587   28354 retry.go:31] will retry after 1.731737854s: waiting for machine to come up
	I0729 01:05:47.883017   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:47.883474   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:47.883511   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:47.883405   28354 retry.go:31] will retry after 2.192020926s: waiting for machine to come up
	I0729 01:05:50.077934   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:50.078526   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:50.078555   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:50.078479   28354 retry.go:31] will retry after 2.583552543s: waiting for machine to come up
	I0729 01:05:52.665052   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:52.665437   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:52.665463   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:52.665420   28354 retry.go:31] will retry after 2.260400072s: waiting for machine to come up
	I0729 01:05:54.927407   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:54.927812   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:54.927841   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:54.927768   28354 retry.go:31] will retry after 4.178032033s: waiting for machine to come up
	I0729 01:05:59.110167   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:59.110627   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:59.110658   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:59.110531   28354 retry.go:31] will retry after 4.108724133s: waiting for machine to come up
	I0729 01:06:03.223090   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.223468   27502 main.go:141] libmachine: (ha-845088-m03) Found IP for machine: 192.168.39.243
	I0729 01:06:03.223498   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has current primary IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.223507   27502 main.go:141] libmachine: (ha-845088-m03) Reserving static IP address...
	I0729 01:06:03.223942   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find host DHCP lease matching {name: "ha-845088-m03", mac: "52:54:00:67:6a:ee", ip: "192.168.39.243"} in network mk-ha-845088
	I0729 01:06:03.303198   27502 main.go:141] libmachine: (ha-845088-m03) Reserved static IP address: 192.168.39.243
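
The loop above is libmachine polling libvirt's DHCP leases for the domain's MAC address, retrying with an increasing, jittered delay ("will retry after ...") until an address appears, then reserving it. A minimal Go sketch of that wait-with-backoff pattern, with a hypothetical lookupIP helper standing in for the libvirt lease query (this is not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for asking libvirt which IP the DHCP lease for this
	// MAC address currently holds; here it always fails, as in the early retries.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with an increasing, jittered delay until the
	// deadline expires, roughly the shape of the retry lines in the log above.
	func waitForIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		backoff := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// add up to 50% jitter and grow the base delay for the next attempt
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff += backoff / 2
			}
		}
		return "", fmt.Errorf("no IP found for %s within %v", mac, deadline)
	}

	func main() {
		ip, err := waitForIP("52:54:00:67:6a:ee", 2*time.Second)
		fmt.Println(ip, err)
	}
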
	I0729 01:06:03.303229   27502 main.go:141] libmachine: (ha-845088-m03) Waiting for SSH to be available...
	I0729 01:06:03.303240   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Getting to WaitForSSH function...
	I0729 01:06:03.306121   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.306568   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.306596   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.306694   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Using SSH client type: external
	I0729 01:06:03.306715   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa (-rw-------)
	I0729 01:06:03.306742   27502 main.go:141] libmachine: (ha-845088-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:06:03.306754   27502 main.go:141] libmachine: (ha-845088-m03) DBG | About to run SSH command:
	I0729 01:06:03.306772   27502 main.go:141] libmachine: (ha-845088-m03) DBG | exit 0
	I0729 01:06:03.435151   27502 main.go:141] libmachine: (ha-845088-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 01:06:03.435395   27502 main.go:141] libmachine: (ha-845088-m03) KVM machine creation complete!
	I0729 01:06:03.435741   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetConfigRaw
	I0729 01:06:03.436328   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:03.436538   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:03.436694   27502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:06:03.436711   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:06:03.438008   27502 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:06:03.438025   27502 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:06:03.438030   27502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:06:03.438036   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.440559   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.440962   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.440991   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.441177   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.441362   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.441505   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.441610   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.441746   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.441948   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.441960   27502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:06:03.558514   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:06:03.558543   27502 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:06:03.558553   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.561702   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.562184   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.562211   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.562407   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.562578   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.562747   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.562892   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.563120   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.563323   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.563336   27502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:06:03.680201   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:06:03.680260   27502 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:06:03.680273   27502 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:06:03.680290   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:06:03.680527   27502 buildroot.go:166] provisioning hostname "ha-845088-m03"
	I0729 01:06:03.680558   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:06:03.680778   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.683683   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.684076   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.684104   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.684241   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.684423   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.684588   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.684716   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.684888   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.685083   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.685095   27502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088-m03 && echo "ha-845088-m03" | sudo tee /etc/hostname
	I0729 01:06:03.811703   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088-m03
	
	I0729 01:06:03.811736   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.814632   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.815049   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.815093   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.815309   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.815501   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.815669   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.815820   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.815959   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.816118   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.816133   27502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:06:03.938958   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:06:03.938986   27502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:06:03.939012   27502 buildroot.go:174] setting up certificates
	I0729 01:06:03.939025   27502 provision.go:84] configureAuth start
	I0729 01:06:03.939045   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:06:03.939363   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:03.942159   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.942561   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.942598   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.942760   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.945067   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.945393   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.945418   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.945599   27502 provision.go:143] copyHostCerts
	I0729 01:06:03.945629   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:06:03.945665   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:06:03.945677   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:06:03.945758   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:06:03.945860   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:06:03.945884   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:06:03.945892   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:06:03.945931   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:06:03.945993   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:06:03.946015   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:06:03.946025   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:06:03.946057   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:06:03.946129   27502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088-m03 san=[127.0.0.1 192.168.39.243 ha-845088-m03 localhost minikube]
	I0729 01:06:04.366831   27502 provision.go:177] copyRemoteCerts
	I0729 01:06:04.366890   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:06:04.366912   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.369754   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.370177   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.370208   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.370466   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.370716   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.370876   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.371026   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:04.462183   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:06:04.462291   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:06:04.487519   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:06:04.487584   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 01:06:04.513367   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:06:04.513425   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:06:04.539048   27502 provision.go:87] duration metric: took 600.004482ms to configureAuth
	I0729 01:06:04.539110   27502 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:06:04.539302   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:06:04.539366   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.542002   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.542446   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.542473   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.542642   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.542924   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.543083   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.543199   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.543379   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:04.543535   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:04.543550   27502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:06:04.811026   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:06:04.811050   27502 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:06:04.811079   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetURL
	I0729 01:06:04.812295   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Using libvirt version 6000000
	I0729 01:06:04.814723   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.815180   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.815225   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.815360   27502 main.go:141] libmachine: Docker is up and running!
	I0729 01:06:04.815375   27502 main.go:141] libmachine: Reticulating splines...
	I0729 01:06:04.815382   27502 client.go:171] duration metric: took 25.489382959s to LocalClient.Create
	I0729 01:06:04.815403   27502 start.go:167] duration metric: took 25.48943964s to libmachine.API.Create "ha-845088"
	I0729 01:06:04.815411   27502 start.go:293] postStartSetup for "ha-845088-m03" (driver="kvm2")
	I0729 01:06:04.815420   27502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:06:04.815436   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:04.815632   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:06:04.815655   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.818038   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.818468   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.818499   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.818610   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.818793   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.818961   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.819114   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:04.906380   27502 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:06:04.911051   27502 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:06:04.911098   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:06:04.911172   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:06:04.911266   27502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:06:04.911279   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:06:04.911382   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:06:04.920907   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:06:04.950815   27502 start.go:296] duration metric: took 135.390141ms for postStartSetup
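
The filesync lines above show postStartSetup scanning the local .minikube/files tree and mapping each file (here 166232.pem) to the absolute path it should occupy inside the guest. A small illustrative sketch of that mapping, assuming a plain directory walk rather than minikube's actual filesync package:

	package main

	import (
		"fmt"
		"io/fs"
		"os"
		"path/filepath"
	)

	// collectAssets walks a local "files" tree and maps each file to the path it
	// should have inside the guest, mirroring lines like
	// "local asset: .../files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs".
	func collectAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(root, path)
			if err != nil {
				return err
			}
			assets[path] = "/" + filepath.ToSlash(rel) // e.g. /etc/ssl/certs/166232.pem
			return nil
		})
		return assets, err
	}

	func main() {
		assets, err := collectAssets(os.ExpandEnv("$HOME/.minikube/files"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for src, dst := range assets {
			fmt.Printf("local asset: %s -> %s\n", src, dst)
		}
	}
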
	I0729 01:06:04.950873   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetConfigRaw
	I0729 01:06:04.951586   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:04.954390   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.954798   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.954830   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.955091   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:06:04.955286   27502 start.go:128] duration metric: took 25.649616647s to createHost
	I0729 01:06:04.955319   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.957627   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.957948   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.957978   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.958093   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.958275   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.958437   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.958580   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.958735   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:04.958894   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:04.958903   27502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:06:05.072345   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215165.049350486
	
	I0729 01:06:05.072371   27502 fix.go:216] guest clock: 1722215165.049350486
	I0729 01:06:05.072378   27502 fix.go:229] Guest: 2024-07-29 01:06:05.049350486 +0000 UTC Remote: 2024-07-29 01:06:04.955297652 +0000 UTC m=+172.871587953 (delta=94.052834ms)
	I0729 01:06:05.072394   27502 fix.go:200] guest clock delta is within tolerance: 94.052834ms
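
The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the host-side timestamp; the ~94ms delta is well inside tolerance, so no clock adjustment is needed. A minimal sketch of that comparison using the values from the log (the 2s tolerance is an assumption for illustration, not the value fix.go uses):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the output of `date +%s.%N` captured over SSH and
	// returns how far the guest clock is from the given host reference time.
	func guestClockDelta(output string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(output), 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", output, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values taken from the log: guest 1722215165.049350486 vs. the host-side
		// "Remote" timestamp; the resulting delta is roughly 94ms.
		host := time.Unix(0, int64(1722215164.955297652*float64(time.Second)))
		delta, err := guestClockDelta("1722215165.049350486\n", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance for illustration
		within := math.Abs(float64(delta)) <= float64(tolerance)
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
	}
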
	I0729 01:06:05.072399   27502 start.go:83] releasing machines lock for "ha-845088-m03", held for 25.766834934s
	I0729 01:06:05.072417   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.072665   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:05.075534   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.075917   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:05.075934   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.078209   27502 out.go:177] * Found network options:
	I0729 01:06:05.079571   27502 out.go:177]   - NO_PROXY=192.168.39.69,192.168.39.68
	W0729 01:06:05.080720   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 01:06:05.080742   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:06:05.080755   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.081231   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.081408   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.081498   27502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:06:05.081537   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	W0729 01:06:05.081597   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 01:06:05.081617   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:06:05.081670   27502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:06:05.081688   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:05.084428   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.084592   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.084850   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:05.084875   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.085018   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:05.085171   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:05.085187   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:05.085198   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.085339   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:05.085402   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:05.085513   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:05.085588   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:05.085661   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:05.085839   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:05.319094   27502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:06:05.325900   27502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:06:05.325961   27502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:06:05.343733   27502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 01:06:05.343759   27502 start.go:495] detecting cgroup driver to use...
	I0729 01:06:05.343833   27502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:06:05.361972   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:06:05.376158   27502 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:06:05.376212   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:06:05.390149   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:06:05.404220   27502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:06:05.530056   27502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:06:05.668459   27502 docker.go:233] disabling docker service ...
	I0729 01:06:05.668541   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:06:05.685042   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:06:05.698627   27502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:06:05.833352   27502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:06:05.948485   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:06:05.967279   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:06:05.990173   27502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:06:05.990244   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.002326   27502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:06:06.002385   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.013743   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.025442   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.036718   27502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:06:06.048527   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.060094   27502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.079343   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.090601   27502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:06:06.100699   27502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:06:06.100778   27502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:06:06.114830   27502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
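
The sysctl failure above is expected on the minimal Buildroot guest: /proc/sys/net/bridge only exists once br_netfilter is loaded, so the check fails, the module is loaded with modprobe, and IPv4 forwarding is enabled. A sketch of that check-then-fallback sequence via os/exec, using the same commands the log shows (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the sequence in the log: try to read the
	// bridge-nf-call-iptables sysctl, and if that fails (module not yet loaded),
	// load br_netfilter, then enable IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("error:", err)
		}
	}
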
	I0729 01:06:06.124586   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:06:06.246180   27502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:06:06.386523   27502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:06:06.386595   27502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:06:06.391480   27502 start.go:563] Will wait 60s for crictl version
	I0729 01:06:06.391535   27502 ssh_runner.go:195] Run: which crictl
	I0729 01:06:06.395224   27502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:06:06.448077   27502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:06:06.448174   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:06:06.477971   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:06:06.509233   27502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:06:06.510624   27502 out.go:177]   - env NO_PROXY=192.168.39.69
	I0729 01:06:06.512009   27502 out.go:177]   - env NO_PROXY=192.168.39.69,192.168.39.68
	I0729 01:06:06.513160   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:06.515805   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:06.516145   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:06.516176   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:06.516327   27502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:06:06.520609   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:06:06.533819   27502 mustload.go:65] Loading cluster: ha-845088
	I0729 01:06:06.534071   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:06:06.534419   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:06:06.534463   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:06:06.549210   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0729 01:06:06.549644   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:06:06.550076   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:06:06.550093   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:06:06.550396   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:06:06.550591   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:06:06.552250   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:06:06.552528   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:06:06.552566   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:06:06.567532   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0729 01:06:06.567966   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:06:06.568449   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:06:06.568470   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:06:06.568779   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:06:06.569014   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:06:06.569152   27502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.243
	I0729 01:06:06.569169   27502 certs.go:194] generating shared ca certs ...
	I0729 01:06:06.569188   27502 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:06:06.569313   27502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:06:06.569349   27502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:06:06.569358   27502 certs.go:256] generating profile certs ...
	I0729 01:06:06.569434   27502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:06:06.569473   27502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb
	I0729 01:06:06.569495   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.68 192.168.39.243 192.168.39.254]
	I0729 01:06:06.802077   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb ...
	I0729 01:06:06.802115   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb: {Name:mkd50706cc4400eb4c34783cde4de9c621fa6155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:06:06.802298   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb ...
	I0729 01:06:06.802313   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb: {Name:mkb018f03dff67b92381e70e7a91ba8bfe22d1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:06:06.802403   27502 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:06:06.802548   27502 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
	I0729 01:06:06.802696   27502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:06:06.802713   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:06:06.802733   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:06:06.802752   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:06:06.802771   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:06:06.802789   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:06:06.802807   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:06:06.802824   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:06:06.802839   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:06:06.802908   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:06:06.802949   27502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:06:06.802964   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:06:06.802997   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:06:06.803028   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:06:06.803077   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:06:06.803142   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:06:06.803180   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:06.803199   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:06:06.803217   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:06:06.803255   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:06:06.806378   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:06.806789   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:06:06.806816   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:06.807046   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:06:06.807275   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:06:06.807443   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:06:06.807611   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:06:06.883465   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 01:06:06.888917   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 01:06:06.900921   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 01:06:06.906724   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 01:06:06.921455   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 01:06:06.927215   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 01:06:06.940525   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 01:06:06.946114   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 01:06:06.961058   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 01:06:06.965520   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 01:06:06.984525   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 01:06:06.989032   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 01:06:07.001578   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:06:07.029642   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:06:07.054791   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:06:07.080274   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:06:07.104019   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 01:06:07.129132   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:06:07.154543   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:06:07.179757   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:06:07.204189   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:06:07.228019   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:06:07.253233   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:06:07.278740   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 01:06:07.296756   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 01:06:07.313630   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 01:06:07.331828   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 01:06:07.357423   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 01:06:07.380635   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 01:06:07.399096   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 01:06:07.416823   27502 ssh_runner.go:195] Run: openssl version
	I0729 01:06:07.422965   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:06:07.433712   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:07.438211   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:07.438265   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:07.443998   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:06:07.454684   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:06:07.465341   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:06:07.469898   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:06:07.469959   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:06:07.475841   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:06:07.486935   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:06:07.498173   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:06:07.503253   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:06:07.503303   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:06:07.509045   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:06:07.519571   27502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:06:07.523630   27502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:06:07.523687   27502 kubeadm.go:934] updating node {m03 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0729 01:06:07.523788   27502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:06:07.523822   27502 kube-vip.go:115] generating kube-vip config ...
	I0729 01:06:07.523867   27502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:06:07.539345   27502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:06:07.539415   27502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 01:06:07.539479   27502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:06:07.549335   27502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 01:06:07.549414   27502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 01:06:07.559210   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 01:06:07.559222   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 01:06:07.559247   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:06:07.559263   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:06:07.559313   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:06:07.559210   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 01:06:07.559383   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:06:07.559461   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:06:07.577126   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:06:07.577126   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 01:06:07.577203   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 01:06:07.577219   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:06:07.577203   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 01:06:07.577249   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 01:06:07.607613   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 01:06:07.607657   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 01:06:08.493286   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 01:06:08.503399   27502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 01:06:08.521569   27502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:06:08.539657   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 01:06:08.558633   27502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:06:08.562915   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:06:08.576034   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:06:08.701810   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:06:08.717937   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:06:08.718364   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:06:08.718413   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:06:08.734339   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0729 01:06:08.734859   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:06:08.735646   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:06:08.735675   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:06:08.736037   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:06:08.736235   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:06:08.736386   27502 start.go:317] joinCluster: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:06:08.736538   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 01:06:08.736559   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:06:08.739516   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:08.739926   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:06:08.739943   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:08.740174   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:06:08.740326   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:06:08.740496   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:06:08.740619   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:06:08.910172   27502 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:06:08.910223   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5zqsql.wm8sxofz5f2yakhi --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m03 --control-plane --apiserver-advertise-address=192.168.39.243 --apiserver-bind-port=8443"
	I0729 01:06:31.990460   27502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5zqsql.wm8sxofz5f2yakhi --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m03 --control-plane --apiserver-advertise-address=192.168.39.243 --apiserver-bind-port=8443": (23.080204816s)
	I0729 01:06:31.990493   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 01:06:32.529477   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-845088-m03 minikube.k8s.io/updated_at=2024_07_29T01_06_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=ha-845088 minikube.k8s.io/primary=false
	I0729 01:06:32.662980   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-845088-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 01:06:32.773580   27502 start.go:319] duration metric: took 24.037189575s to joinCluster
	I0729 01:06:32.773664   27502 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:06:32.774045   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:06:32.775339   27502 out.go:177] * Verifying Kubernetes components...
	I0729 01:06:32.776777   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:06:33.069249   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:06:33.120420   27502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:06:33.120748   27502 kapi.go:59] client config for ha-845088: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 01:06:33.120846   27502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I0729 01:06:33.121095   27502 node_ready.go:35] waiting up to 6m0s for node "ha-845088-m03" to be "Ready" ...
	I0729 01:06:33.121176   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:33.121184   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:33.121195   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:33.121203   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:33.126534   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:33.621507   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:33.621529   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:33.621538   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:33.621546   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:33.625753   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:34.121777   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:34.121800   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:34.121811   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:34.121816   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:34.125751   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:34.621330   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:34.621379   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:34.621395   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:34.621402   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:34.626049   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:35.122166   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:35.122191   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:35.122204   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:35.122209   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:35.127744   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:35.128219   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:35.622134   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:35.622164   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:35.622177   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:35.622183   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:35.627500   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:36.122190   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:36.122212   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:36.122220   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:36.122223   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:36.126009   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:36.621975   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:36.622000   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:36.622011   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:36.622017   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:36.625659   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:37.121698   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:37.121724   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:37.121736   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:37.121743   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:37.125547   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:37.621948   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:37.621973   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:37.621985   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:37.621992   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:37.626169   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:37.626850   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:38.121948   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:38.121978   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:38.121990   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:38.121996   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:38.125808   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:38.621364   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:38.621385   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:38.621392   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:38.621396   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:38.625132   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:39.122128   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:39.122149   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:39.122159   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:39.122166   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:39.129509   27502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 01:06:39.622138   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:39.622164   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:39.622176   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:39.622182   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:39.626023   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:40.121882   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:40.121906   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:40.121914   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:40.121917   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:40.125872   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:40.126455   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:40.622278   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:40.622300   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:40.622310   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:40.622316   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:40.626476   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:41.121458   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:41.121478   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:41.121487   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:41.121491   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:41.125099   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:41.622300   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:41.622334   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:41.622341   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:41.622363   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:41.625936   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:42.122089   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:42.122108   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:42.122115   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:42.122120   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:42.126042   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:42.126658   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:42.622300   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:42.622326   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:42.622339   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:42.622344   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:42.625647   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:43.121872   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:43.121892   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:43.121909   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:43.121913   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:43.125764   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:43.621927   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:43.621947   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:43.621955   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:43.621960   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:43.627407   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:44.122215   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:44.122237   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:44.122243   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:44.122248   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:44.125791   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:44.621423   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:44.621444   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:44.621452   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:44.621456   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:44.624728   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:44.625383   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:45.121792   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:45.121818   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:45.121828   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:45.121836   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:45.125439   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:45.621762   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:45.621786   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:45.621795   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:45.621800   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:45.625405   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:46.121569   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:46.121590   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:46.121598   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:46.121601   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:46.125233   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:46.621723   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:46.621743   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:46.621754   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:46.621760   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:46.625514   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:46.626605   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:47.122029   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:47.122054   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:47.122065   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:47.122070   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:47.125507   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:47.621740   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:47.621788   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:47.621800   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:47.621807   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:47.625683   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:48.121996   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:48.122019   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:48.122026   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:48.122034   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:48.126309   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:48.621519   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:48.621542   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:48.621553   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:48.621557   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:48.625147   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:49.122033   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:49.122052   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:49.122059   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:49.122063   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:49.125561   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:49.126204   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:49.622018   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:49.622038   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:49.622049   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:49.622062   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:49.625592   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.121595   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:50.121618   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.121639   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.121656   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.124936   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.622298   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:50.622319   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.622327   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.622332   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.627435   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:50.627995   27502 node_ready.go:49] node "ha-845088-m03" has status "Ready":"True"
	I0729 01:06:50.628015   27502 node_ready.go:38] duration metric: took 17.506903062s for node "ha-845088-m03" to be "Ready" ...
	I0729 01:06:50.628023   27502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:06:50.628087   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:50.628100   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.628107   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.628113   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.637007   27502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 01:06:50.644759   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.644863   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-26phs
	I0729 01:06:50.644873   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.644883   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.644892   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.648644   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.649244   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:50.649260   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.649271   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.649275   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.651972   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.652545   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.652564   27502 pod_ready.go:81] duration metric: took 7.779242ms for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.652576   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.652640   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x4jjj
	I0729 01:06:50.652648   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.652655   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.652660   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.655020   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.655717   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:50.655732   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.655741   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.655747   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.659322   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.659818   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.659840   27502 pod_ready.go:81] duration metric: took 7.253994ms for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.659849   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.659898   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088
	I0729 01:06:50.659907   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.659914   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.659918   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.662415   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.662913   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:50.662926   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.662934   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.662938   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.665332   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.665797   27502 pod_ready.go:92] pod "etcd-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.665815   27502 pod_ready.go:81] duration metric: took 5.960268ms for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.665823   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.665875   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m02
	I0729 01:06:50.665882   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.665888   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.665893   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.668354   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.668806   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:50.668820   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.668827   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.668831   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.671325   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.671760   27502 pod_ready.go:92] pod "etcd-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.671777   27502 pod_ready.go:81] duration metric: took 5.946655ms for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.671785   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.823180   27502 request.go:629] Waited for 151.318684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m03
	I0729 01:06:50.823241   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m03
	I0729 01:06:50.823246   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.823256   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.823264   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.826515   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.022605   27502 request.go:629] Waited for 195.358941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:51.022673   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:51.022679   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.022686   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.022690   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.026093   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.026822   27502 pod_ready.go:92] pod "etcd-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:51.026842   27502 pod_ready.go:81] duration metric: took 355.049089ms for pod "etcd-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.026864   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.222995   27502 request.go:629] Waited for 196.062419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:06:51.223044   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:06:51.223049   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.223073   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.223079   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.226304   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.422404   27502 request.go:629] Waited for 195.275924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:51.422477   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:51.422482   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.422489   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.422493   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.426021   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.426717   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:51.426731   27502 pod_ready.go:81] duration metric: took 399.860523ms for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.426741   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.622804   27502 request.go:629] Waited for 195.971586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:06:51.622866   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:06:51.622874   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.622888   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.622897   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.626804   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.823038   27502 request.go:629] Waited for 195.321561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:51.823118   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:51.823127   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.823135   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.823140   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.826588   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.827178   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:51.827199   27502 pod_ready.go:81] duration metric: took 400.449389ms for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.827208   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.023303   27502 request.go:629] Waited for 196.027124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m03
	I0729 01:06:52.023401   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m03
	I0729 01:06:52.023413   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.023424   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.023431   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.027029   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.223135   27502 request.go:629] Waited for 195.083537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:52.223187   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:52.223192   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.223201   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.223205   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.226835   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.227608   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:52.227629   27502 pod_ready.go:81] duration metric: took 400.413096ms for pod "kube-apiserver-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.227641   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.423124   27502 request.go:629] Waited for 195.414268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:06:52.423213   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:06:52.423224   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.423234   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.423244   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.426879   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.623190   27502 request.go:629] Waited for 195.358566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:52.623245   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:52.623252   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.623262   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.623266   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.626629   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.627190   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:52.627208   27502 pod_ready.go:81] duration metric: took 399.561032ms for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.627218   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.822296   27502 request.go:629] Waited for 195.014469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:06:52.822379   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:06:52.822385   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.822392   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.822397   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.826516   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:53.023340   27502 request.go:629] Waited for 196.262158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:53.023397   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:53.023402   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.023410   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.023417   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.026669   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:53.027273   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:53.027290   27502 pod_ready.go:81] duration metric: took 400.066313ms for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.027300   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.222318   27502 request.go:629] Waited for 194.955355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m03
	I0729 01:06:53.222374   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m03
	I0729 01:06:53.222379   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.222387   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.222391   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.227575   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:53.423053   27502 request.go:629] Waited for 194.376949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.423127   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.423133   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.423140   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.423144   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.426733   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:53.427399   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:53.427419   27502 pod_ready.go:81] duration metric: took 400.112689ms for pod "kube-controller-manager-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.427429   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4965" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.622494   27502 request.go:629] Waited for 195.005719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4965
	I0729 01:06:53.622590   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4965
	I0729 01:06:53.622602   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.622613   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.622621   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.626301   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:53.822925   27502 request.go:629] Waited for 195.789141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.822979   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.822985   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.822994   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.822999   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.827869   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:53.828629   27502 pod_ready.go:92] pod "kube-proxy-f4965" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:53.828645   27502 pod_ready.go:81] duration metric: took 401.210506ms for pod "kube-proxy-f4965" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.828654   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.022753   27502 request.go:629] Waited for 194.019404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:06:54.022808   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:06:54.022815   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.022827   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.022838   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.026865   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:54.222914   27502 request.go:629] Waited for 195.356366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:54.222974   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:54.222980   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.223002   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.223023   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.226655   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:54.227272   27502 pod_ready.go:92] pod "kube-proxy-j6gxl" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:54.227292   27502 pod_ready.go:81] duration metric: took 398.631895ms for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.227306   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.422738   27502 request.go:629] Waited for 195.363958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:06:54.422789   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:06:54.422793   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.422801   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.422806   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.425963   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:54.623126   27502 request.go:629] Waited for 196.438329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:54.623181   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:54.623189   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.623200   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.623211   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.626584   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:54.627203   27502 pod_ready.go:92] pod "kube-proxy-tmzt7" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:54.627224   27502 pod_ready.go:81] duration metric: took 399.909597ms for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.627236   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.822276   27502 request.go:629] Waited for 194.971609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:06:54.822343   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:06:54.822348   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.822356   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.822360   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.825734   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.022479   27502 request.go:629] Waited for 196.276271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:55.022554   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:55.022561   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.022571   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.022582   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.026037   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.026626   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:55.026643   27502 pod_ready.go:81] duration metric: took 399.399806ms for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.026655   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.222698   27502 request.go:629] Waited for 195.97885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:06:55.222750   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:06:55.222756   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.222764   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.222770   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.227134   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:55.422294   27502 request.go:629] Waited for 194.282327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:55.422351   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:55.422357   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.422364   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.422368   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.425636   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.426269   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:55.426288   27502 pod_ready.go:81] duration metric: took 399.624394ms for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.426302   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.622660   27502 request.go:629] Waited for 196.27777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m03
	I0729 01:06:55.622725   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m03
	I0729 01:06:55.622732   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.622743   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.622752   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.626385   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.822392   27502 request.go:629] Waited for 195.255482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:55.822441   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:55.822448   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.822455   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.822459   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.825634   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.826187   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:55.826205   27502 pod_ready.go:81] duration metric: took 399.895578ms for pod "kube-scheduler-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.826223   27502 pod_ready.go:38] duration metric: took 5.198189101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:06:55.826243   27502 api_server.go:52] waiting for apiserver process to appear ...
	I0729 01:06:55.826292   27502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:06:55.845508   27502 api_server.go:72] duration metric: took 23.071807835s to wait for apiserver process to appear ...
	I0729 01:06:55.845540   27502 api_server.go:88] waiting for apiserver healthz status ...
	I0729 01:06:55.845563   27502 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0729 01:06:55.850162   27502 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0729 01:06:55.850230   27502 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I0729 01:06:55.850240   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.850251   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.850259   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.851222   27502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 01:06:55.851293   27502 api_server.go:141] control plane version: v1.30.3
	I0729 01:06:55.851307   27502 api_server.go:131] duration metric: took 5.76055ms to wait for apiserver health ...
	I0729 01:06:55.851316   27502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 01:06:56.022703   27502 request.go:629] Waited for 171.322479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.022765   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.022770   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.022777   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.022781   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.029815   27502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 01:06:56.036379   27502 system_pods.go:59] 24 kube-system pods found
	I0729 01:06:56.036410   27502 system_pods.go:61] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:06:56.036417   27502 system_pods.go:61] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:06:56.036421   27502 system_pods.go:61] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:06:56.036427   27502 system_pods.go:61] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:06:56.036430   27502 system_pods.go:61] "etcd-ha-845088-m03" [3a225030-386d-4e16-875f-bc5ecb3b2692] Running
	I0729 01:06:56.036435   27502 system_pods.go:61] "kindnet-fvw2k" [c0096f64-69dd-4a0f-853f-7798d413bde2] Running
	I0729 01:06:56.036438   27502 system_pods.go:61] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:06:56.036442   27502 system_pods.go:61] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:06:56.036445   27502 system_pods.go:61] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:06:56.036448   27502 system_pods.go:61] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:06:56.036451   27502 system_pods.go:61] "kube-apiserver-ha-845088-m03" [3062f069-6eba-4418-9778-43689dab75bb] Running
	I0729 01:06:56.036455   27502 system_pods.go:61] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:06:56.036459   27502 system_pods.go:61] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:06:56.036463   27502 system_pods.go:61] "kube-controller-manager-ha-845088-m03" [71e94457-a846-4756-ab5e-9373344a5f4a] Running
	I0729 01:06:56.036469   27502 system_pods.go:61] "kube-proxy-f4965" [23788f31-afa6-43f9-b5ec-2facd23efe4e] Running
	I0729 01:06:56.036472   27502 system_pods.go:61] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:06:56.036475   27502 system_pods.go:61] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:06:56.036479   27502 system_pods.go:61] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:06:56.036483   27502 system_pods.go:61] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:06:56.036486   27502 system_pods.go:61] "kube-scheduler-ha-845088-m03" [a7e34040-d0d4-453a-bc66-d826c253a9e5] Running
	I0729 01:06:56.036489   27502 system_pods.go:61] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:06:56.036494   27502 system_pods.go:61] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:06:56.036497   27502 system_pods.go:61] "kube-vip-ha-845088-m03" [5b8e796c-8556-4cc1-a46d-7c4c23fc43df] Running
	I0729 01:06:56.036500   27502 system_pods.go:61] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:06:56.036506   27502 system_pods.go:74] duration metric: took 185.184729ms to wait for pod list to return data ...
	I0729 01:06:56.036516   27502 default_sa.go:34] waiting for default service account to be created ...
	I0729 01:06:56.222913   27502 request.go:629] Waited for 186.333292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:06:56.222964   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:06:56.222968   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.222976   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.222979   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.226385   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:56.226513   27502 default_sa.go:45] found service account: "default"
	I0729 01:06:56.226530   27502 default_sa.go:55] duration metric: took 190.008463ms for default service account to be created ...
	I0729 01:06:56.226537   27502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 01:06:56.422888   27502 request.go:629] Waited for 196.263264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.422952   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.422962   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.422973   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.422980   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.430352   27502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 01:06:56.437114   27502 system_pods.go:86] 24 kube-system pods found
	I0729 01:06:56.437138   27502 system_pods.go:89] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:06:56.437144   27502 system_pods.go:89] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:06:56.437148   27502 system_pods.go:89] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:06:56.437153   27502 system_pods.go:89] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:06:56.437158   27502 system_pods.go:89] "etcd-ha-845088-m03" [3a225030-386d-4e16-875f-bc5ecb3b2692] Running
	I0729 01:06:56.437165   27502 system_pods.go:89] "kindnet-fvw2k" [c0096f64-69dd-4a0f-853f-7798d413bde2] Running
	I0729 01:06:56.437170   27502 system_pods.go:89] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:06:56.437180   27502 system_pods.go:89] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:06:56.437186   27502 system_pods.go:89] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:06:56.437194   27502 system_pods.go:89] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:06:56.437201   27502 system_pods.go:89] "kube-apiserver-ha-845088-m03" [3062f069-6eba-4418-9778-43689dab75bb] Running
	I0729 01:06:56.437207   27502 system_pods.go:89] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:06:56.437214   27502 system_pods.go:89] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:06:56.437219   27502 system_pods.go:89] "kube-controller-manager-ha-845088-m03" [71e94457-a846-4756-ab5e-9373344a5f4a] Running
	I0729 01:06:56.437225   27502 system_pods.go:89] "kube-proxy-f4965" [23788f31-afa6-43f9-b5ec-2facd23efe4e] Running
	I0729 01:06:56.437229   27502 system_pods.go:89] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:06:56.437235   27502 system_pods.go:89] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:06:56.437239   27502 system_pods.go:89] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:06:56.437245   27502 system_pods.go:89] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:06:56.437250   27502 system_pods.go:89] "kube-scheduler-ha-845088-m03" [a7e34040-d0d4-453a-bc66-d826c253a9e5] Running
	I0729 01:06:56.437256   27502 system_pods.go:89] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:06:56.437260   27502 system_pods.go:89] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:06:56.437268   27502 system_pods.go:89] "kube-vip-ha-845088-m03" [5b8e796c-8556-4cc1-a46d-7c4c23fc43df] Running
	I0729 01:06:56.437276   27502 system_pods.go:89] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:06:56.437287   27502 system_pods.go:126] duration metric: took 210.741737ms to wait for k8s-apps to be running ...
	I0729 01:06:56.437299   27502 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 01:06:56.437347   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:06:56.454489   27502 system_svc.go:56] duration metric: took 17.1809ms WaitForService to wait for kubelet
	I0729 01:06:56.454519   27502 kubeadm.go:582] duration metric: took 23.680824506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:06:56.454543   27502 node_conditions.go:102] verifying NodePressure condition ...
	I0729 01:06:56.622609   27502 request.go:629] Waited for 167.986442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I0729 01:06:56.622694   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I0729 01:06:56.622701   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.622711   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.622716   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.627702   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:56.628848   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:06:56.628880   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:06:56.628894   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:06:56.628899   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:06:56.628904   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:06:56.628909   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:06:56.628915   27502 node_conditions.go:105] duration metric: took 174.365815ms to run NodePressure ...
	I0729 01:06:56.628932   27502 start.go:241] waiting for startup goroutines ...
	I0729 01:06:56.628959   27502 start.go:255] writing updated cluster config ...
	I0729 01:06:56.629322   27502 ssh_runner.go:195] Run: rm -f paused
	I0729 01:06:56.683819   27502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 01:06:56.685924   27502 out.go:177] * Done! kubectl is now configured to use "ha-845088" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.240418784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215438240393327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e31e21c-0902-43d9-83e3-da8bdd702e0e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.241123891Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8224de88-4810-4b36-924b-896d5f5ae612 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.241181653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8224de88-4810-4b36-924b-896d5f5ae612 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.241398726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8224de88-4810-4b36-924b-896d5f5ae612 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.286155293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb886059-1f63-47f2-ae3f-1b6280b38002 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.286244490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb886059-1f63-47f2-ae3f-1b6280b38002 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.287273647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04bdd799-4c4f-49b2-97f4-bf166cad3540 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.287758544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215438287733905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04bdd799-4c4f-49b2-97f4-bf166cad3540 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.288343849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c173dce0-7627-4e8f-8393-446c11c7a210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.288416969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c173dce0-7627-4e8f-8393-446c11c7a210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.288647080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c173dce0-7627-4e8f-8393-446c11c7a210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.332702918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=453fee06-25ed-4102-a985-585c492e1b4e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.332825377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=453fee06-25ed-4102-a985-585c492e1b4e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.334380941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50a36c0d-ac9d-442c-8ec0-4803b669ee59 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.334838533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215438334817408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50a36c0d-ac9d-442c-8ec0-4803b669ee59 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.335664174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fcaa901-fdc9-4468-9422-887047ea13f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.335738926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fcaa901-fdc9-4468-9422-887047ea13f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.335974988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fcaa901-fdc9-4468-9422-887047ea13f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.380300308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22824743-ad82-4615-8c27-9ce202184b1e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.380408448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22824743-ad82-4615-8c27-9ce202184b1e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.381266147Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=868ef947-7c81-42fb-b4ff-59b1dc02b688 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.381970883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215438381947466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=868ef947-7c81-42fb-b4ff-59b1dc02b688 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.382587250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d857c9c4-c054-4dd5-9117-968e1cad1055 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.382643076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d857c9c4-c054-4dd5-9117-968e1cad1055 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:10:38 ha-845088 crio[684]: time="2024-07-29 01:10:38.382874890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d857c9c4-c054-4dd5-9117-968e1cad1055 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	393f89e96685f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   077fc92624630       busybox-fc5497c4f-kdxhf
	102a2205a11ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   860aff4792108       coredns-7db6d8ff4d-26phs
	dd54eae7304e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   0f3c4c82eabf7       storage-provisioner
	4c9a1e2ce8399       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   5998a0c18499b       coredns-7db6d8ff4d-x4jjj
	b117823d9ea03       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   d036858417b61       kindnet-jz7gr
	ba58523a71dfb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   a37edf1e80380       kube-proxy-tmzt7
	994e26254fd08       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   e6d68b2b55c98       kube-vip-ha-845088
	2d545f40bcf5d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   00d828e6fd11c       etcd-ha-845088
	71cb29192a2ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   64651fd976b6f       kube-scheduler-ha-845088
	2f0d5f5418f21       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   35638eec4b181       kube-controller-manager-ha-845088
	32f40f9b4c144       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   88c63df98913c       kube-apiserver-ha-845088
	
	
	==> coredns [102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6] <==
	[INFO] 10.244.1.2:39393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025886s
	[INFO] 10.244.0.4:38271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097383s
	[INFO] 10.244.0.4:51459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018403s
	[INFO] 10.244.0.4:45452 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208939s
	[INFO] 10.244.0.4:33630 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083936s
	[INFO] 10.244.0.4:56145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107111s
	[INFO] 10.244.0.4:49547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013737s
	[INFO] 10.244.2.2:50551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157425s
	[INFO] 10.244.2.2:54720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000849s
	[INFO] 10.244.2.2:46977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133922s
	[INFO] 10.244.2.2:52278 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098427s
	[INFO] 10.244.2.2:33523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166768s
	[INFO] 10.244.2.2:56762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127309s
	[INFO] 10.244.1.2:60690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162836s
	[INFO] 10.244.0.4:53481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125124s
	[INFO] 10.244.0.4:36302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006046s
	[INFO] 10.244.2.2:51131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200754s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135186s
	[INFO] 10.244.2.2:47188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095941s
	[INFO] 10.244.2.2:45175 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088023s
	[INFO] 10.244.1.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271227s
	[INFO] 10.244.0.4:35507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089711s
	[INFO] 10.244.0.4:48138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191709s
	[INFO] 10.244.2.2:46681 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084718s
	[INFO] 10.244.2.2:58403 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190529s
	
	
	==> coredns [4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87] <==
	[INFO] 10.244.0.4:49094 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000115404s
	[INFO] 10.244.2.2:58484 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183253s
	[INFO] 10.244.2.2:50917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000093443s
	[INFO] 10.244.1.2:40330 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137004s
	[INFO] 10.244.1.2:40312 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003445772s
	[INFO] 10.244.1.2:54896 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275281s
	[INFO] 10.244.1.2:36709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149351s
	[INFO] 10.244.1.2:35599 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014616s
	[INFO] 10.244.1.2:40232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145035s
	[INFO] 10.244.0.4:42879 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002077041s
	[INFO] 10.244.0.4:46236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001377262s
	[INFO] 10.244.2.2:60143 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018397s
	[INFO] 10.244.2.2:33059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001229041s
	[INFO] 10.244.1.2:50949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114887s
	[INFO] 10.244.1.2:41895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099234s
	[INFO] 10.244.1.2:57885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008087s
	[INFO] 10.244.0.4:46809 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202377s
	[INFO] 10.244.0.4:54702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067695s
	[INFO] 10.244.1.2:33676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193639s
	[INFO] 10.244.1.2:35018 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014376s
	[INFO] 10.244.1.2:58362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164011s
	[INFO] 10.244.0.4:42745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108289s
	[INFO] 10.244.0.4:38059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080482s
	[INFO] 10.244.2.2:57416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132756s
	[INFO] 10.244.2.2:34696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000282968s
	
	
	==> describe nodes <==
	Name:               ha-845088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_03_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:10:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:04:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-845088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbb04d72e92946e88c1da68d30c7bef3
	  System UUID:                fbb04d72-e929-46e8-8c1d-a68d30c7bef3
	  Boot ID:                    8609abf0-fb2f-4316-bc25-edde00b876e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kdxhf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7db6d8ff4d-26phs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7db6d8ff4d-x4jjj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-845088                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m42s
	  kube-system                 kindnet-jz7gr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-845088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-ha-845088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-proxy-tmzt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-845088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-vip-ha-845088                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m42s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m42s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m42s  kubelet          Node ha-845088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s  kubelet          Node ha-845088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s  kubelet          Node ha-845088 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal  NodeReady                6m12s  kubelet          Node ha-845088 status is now: NodeReady
	  Normal  RegisteredNode           5m7s   node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal  RegisteredNode           3m52s  node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	
	
	Name:               ha-845088-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:05:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:08:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-845088-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71d77df4f03a4876b498a96bcef9ff64
	  System UUID:                71d77df4-f03a-4876-b498-a96bcef9ff64
	  Boot ID:                    9f6c4b85-e410-4558-8767-01550bcc9b1c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dbfgn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-845088-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-p87gx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-845088-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-845088-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-j6gxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-845088-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-845088-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-845088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-845088-m02 status is now: NodeNotReady
	
	
	Name:               ha-845088-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_06_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:06:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:10:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    ha-845088-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a156142ecc543bebea07e4da7f3d99e
	  System UUID:                1a156142-ecc5-43be-bea0-7e4da7f3d99e
	  Boot ID:                    cfe16ffe-c16a-4205-be07-6a555787e997
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wvsr6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-845088-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-fvw2k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-845088-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-845088-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-f4965                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-845088-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-845088-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-845088-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	
	
	Name:               ha-845088-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_07_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:10:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-845088-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f15978c17b794a0dab280aaa8e6fe8a4
	  System UUID:                f15978c1-7b79-4a0d-ab28-0aaa8e6fe8a4
	  Boot ID:                    0bfe37db-c4f2-4e8b-9f45-1737af272bfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rffd2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-bbp9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-845088-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-845088-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050829] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779563] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.551910] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.576682] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.177713] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054473] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057858] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159603] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.120915] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.261683] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.164596] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +4.624660] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060939] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.270727] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.083870] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 01:04] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.392423] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 01:05] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46] <==
	{"level":"warn","ts":"2024-07-29T01:10:38.636203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.663068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.670827Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.674542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.694598Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.709959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.723501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.728929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.732925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.735594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.744626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.750414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.752558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.762684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.768491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.773144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.783114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.79578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.802826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.806942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.810305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.815583Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.823626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.831953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:10:38.837096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:10:38 up 7 min,  0 users,  load average: 0.33, 0.39, 0.20
	Linux ha-845088 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198] <==
	I0729 01:10:06.408314       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:10:16.407596       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:10:16.407626       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:10:16.407761       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:10:16.407787       1 main.go:299] handling current node
	I0729 01:10:16.407799       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:10:16.407804       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:10:16.407860       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:10:16.407865       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:10:26.414864       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:10:26.414992       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:10:26.415256       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:10:26.415294       1 main.go:299] handling current node
	I0729 01:10:26.415315       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:10:26.415323       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:10:26.415414       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:10:26.415444       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:10:36.414631       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:10:36.414714       1 main.go:299] handling current node
	I0729 01:10:36.414745       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:10:36.414754       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:10:36.414942       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:10:36.414966       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:10:36.415089       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:10:36.415111       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5] <==
	I0729 01:03:56.166924       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:03:56.190107       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 01:03:56.204710       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:04:08.624663       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 01:04:09.319132       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 01:07:02.499914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45894: use of closed network connection
	E0729 01:07:02.709908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45926: use of closed network connection
	E0729 01:07:02.904847       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45946: use of closed network connection
	E0729 01:07:03.125741       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45970: use of closed network connection
	E0729 01:07:03.327461       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45992: use of closed network connection
	E0729 01:07:03.511852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46010: use of closed network connection
	E0729 01:07:03.688089       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46016: use of closed network connection
	E0729 01:07:03.874804       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46038: use of closed network connection
	E0729 01:07:04.067333       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46056: use of closed network connection
	E0729 01:07:04.359571       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46086: use of closed network connection
	E0729 01:07:04.538798       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46114: use of closed network connection
	E0729 01:07:04.722192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46140: use of closed network connection
	E0729 01:07:04.908798       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46144: use of closed network connection
	E0729 01:07:05.114947       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46156: use of closed network connection
	E0729 01:07:05.332224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46178: use of closed network connection
	I0729 01:07:41.128643       1 trace.go:236] Trace[1411284324]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:01f87a8c-35ed-4845-8e04-6282cab007be,client:192.168.39.254,api-group:coordination.k8s.io,api-version:v1,name:ha-845088,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-845088,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PUT (29-Jul-2024 01:07:40.627) (total time: 501ms):
	Trace[1411284324]: ["GuaranteedUpdate etcd3" audit-id:01f87a8c-35ed-4845-8e04-6282cab007be,key:/leases/kube-node-lease/ha-845088,type:*coordination.Lease,resource:leases.coordination.k8s.io 500ms (01:07:40.627)
	Trace[1411284324]:  ---"Txn call completed" 499ms (01:07:41.128)]
	Trace[1411284324]: [501.02857ms] [501.02857ms] END
	W0729 01:08:24.986762       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243 192.168.39.69]
	
	
	==> kube-controller-manager [2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c] <==
	I0729 01:06:57.852936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.137318ms"
	I0729 01:06:57.933836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.607458ms"
	I0729 01:06:57.987372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.478288ms"
	I0729 01:06:57.987505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.447µs"
	I0729 01:06:58.110396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.228638ms"
	I0729 01:06:58.111457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.469µs"
	I0729 01:06:58.835097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.763µs"
	I0729 01:06:59.038789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.09µs"
	I0729 01:06:59.046419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.438µs"
	I0729 01:06:59.057199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.482µs"
	I0729 01:07:00.756424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.151709ms"
	I0729 01:07:00.756571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.993µs"
	I0729 01:07:01.930100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.212087ms"
	I0729 01:07:01.930424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.955µs"
	I0729 01:07:02.062845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.844485ms"
	I0729 01:07:02.063913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.032µs"
	E0729 01:07:36.706535       1 certificate_controller.go:146] Sync csr-4grvg failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-4grvg": the object has been modified; please apply your changes to the latest version and try again
	E0729 01:07:36.733899       1 certificate_controller.go:146] Sync csr-4grvg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-4grvg": the object has been modified; please apply your changes to the latest version and try again
	I0729 01:07:36.982348       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-845088-m04\" does not exist"
	I0729 01:07:37.025140       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-845088-m04" podCIDRs=["10.244.3.0/24"]
	I0729 01:07:38.805985       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-845088-m04"
	I0729 01:07:57.447108       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-845088-m04"
	I0729 01:08:58.851112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-845088-m04"
	I0729 01:08:59.008338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.730323ms"
	I0729 01:08:59.008459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.148µs"
	
	
	==> kube-proxy [ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8] <==
	I0729 01:04:10.440856       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:04:10.458819       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 01:04:10.509960       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:04:10.510100       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:04:10.510134       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:04:10.513768       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:04:10.514374       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:04:10.514479       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:04:10.516370       1 config.go:192] "Starting service config controller"
	I0729 01:04:10.516560       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:04:10.516607       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:04:10.516625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:04:10.519592       1 config.go:319] "Starting node config controller"
	I0729 01:04:10.519693       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:04:10.617213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:04:10.617252       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:04:10.619851       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60] <==
	W0729 01:03:54.330161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:03:54.330212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:03:54.351760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 01:03:54.351883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 01:03:54.367219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:03:54.367313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:03:54.423091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:03:54.423258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:03:54.559664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:03:54.559712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 01:03:56.609195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 01:07:37.083711       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rffd2\": pod kindnet-rffd2 is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rffd2" node="ha-845088-m04"
	E0729 01:07:37.083959       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 06c4010c-e52d-4782-8c8d-05b8aed68ae1(kube-system/kindnet-rffd2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rffd2"
	E0729 01:07:37.083992       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rffd2\": pod kindnet-rffd2 is already assigned to node \"ha-845088-m04\"" pod="kube-system/kindnet-rffd2"
	I0729 01:07:37.084075       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rffd2" node="ha-845088-m04"
	E0729 01:07:37.098504       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zsmqf\": pod kube-proxy-zsmqf is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zsmqf" node="ha-845088-m04"
	E0729 01:07:37.098571       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2ba8ef1a-2849-40e5-b08d-a44513494774(kube-system/kube-proxy-zsmqf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zsmqf"
	E0729 01:07:37.098594       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zsmqf\": pod kube-proxy-zsmqf is already assigned to node \"ha-845088-m04\"" pod="kube-system/kube-proxy-zsmqf"
	I0729 01:07:37.098636       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zsmqf" node="ha-845088-m04"
	E0729 01:07:37.177814       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x248x\": pod kindnet-x248x is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x248x" node="ha-845088-m04"
	E0729 01:07:37.177910       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x248x\": pod kindnet-x248x is already assigned to node \"ha-845088-m04\"" pod="kube-system/kindnet-x248x"
	E0729 01:07:38.118636       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bbp9f\": pod kube-proxy-bbp9f is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bbp9f" node="ha-845088-m04"
	E0729 01:07:38.118713       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5917b1fe-1ae9-4713-9760-1dc324ac52d3(kube-system/kube-proxy-bbp9f) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bbp9f"
	E0729 01:07:38.118752       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bbp9f\": pod kube-proxy-bbp9f is already assigned to node \"ha-845088-m04\"" pod="kube-system/kube-proxy-bbp9f"
	I0729 01:07:38.118774       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bbp9f" node="ha-845088-m04"
	
	
	==> kubelet <==
	Jul 29 01:05:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:05:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:06:56 ha-845088 kubelet[1372]: E0729 01:06:56.147058    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:06:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:06:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:06:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:06:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:06:57 ha-845088 kubelet[1372]: I0729 01:06:57.610708    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x4jjj" podStartSLOduration=168.610584009 podStartE2EDuration="2m48.610584009s" podCreationTimestamp="2024-07-29 01:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:04:28.358229614 +0000 UTC m=+32.399002888" watchObservedRunningTime="2024-07-29 01:06:57.610584009 +0000 UTC m=+181.651357290"
	Jul 29 01:06:57 ha-845088 kubelet[1372]: I0729 01:06:57.612613    1372 topology_manager.go:215] "Topology Admit Handler" podUID="3d626cc7-0294-43eb-903b-83ee7ea03f3d" podNamespace="default" podName="busybox-fc5497c4f-kdxhf"
	Jul 29 01:06:57 ha-845088 kubelet[1372]: I0729 01:06:57.718186    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4lp\" (UniqueName: \"kubernetes.io/projected/3d626cc7-0294-43eb-903b-83ee7ea03f3d-kube-api-access-6n4lp\") pod \"busybox-fc5497c4f-kdxhf\" (UID: \"3d626cc7-0294-43eb-903b-83ee7ea03f3d\") " pod="default/busybox-fc5497c4f-kdxhf"
	Jul 29 01:07:56 ha-845088 kubelet[1372]: E0729 01:07:56.143984    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:07:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:07:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:07:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:07:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:08:56 ha-845088 kubelet[1372]: E0729 01:08:56.145752    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:08:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:08:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:08:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:08:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:09:56 ha-845088 kubelet[1372]: E0729 01:09:56.145167    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:09:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:09:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:09:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:09:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-845088 -n ha-845088
helpers_test.go:261: (dbg) Run:  kubectl --context ha-845088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (3.200982166s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:10:43.373538   32409 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:10:43.373641   32409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:43.373649   32409 out.go:304] Setting ErrFile to fd 2...
	I0729 01:10:43.373654   32409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:43.373838   32409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:10:43.373981   32409 out.go:298] Setting JSON to false
	I0729 01:10:43.374003   32409 mustload.go:65] Loading cluster: ha-845088
	I0729 01:10:43.374045   32409 notify.go:220] Checking for updates...
	I0729 01:10:43.374343   32409 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:10:43.374356   32409 status.go:255] checking status of ha-845088 ...
	I0729 01:10:43.374700   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:43.374752   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:43.394129   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0729 01:10:43.394583   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:43.395264   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:43.395303   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:43.395646   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:43.395848   32409 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:10:43.397419   32409 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:10:43.397436   32409 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:43.397706   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:43.397744   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:43.412304   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32771
	I0729 01:10:43.412767   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:43.413237   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:43.413285   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:43.413643   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:43.413813   32409 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:10:43.416335   32409 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:43.416676   32409 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:43.416698   32409 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:43.416853   32409 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:43.417152   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:43.417191   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:43.432078   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37407
	I0729 01:10:43.432474   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:43.432907   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:43.432927   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:43.433215   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:43.433437   32409 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:10:43.433604   32409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:43.433630   32409 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:10:43.436409   32409 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:43.436799   32409 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:43.436820   32409 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:43.436945   32409 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:10:43.437176   32409 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:10:43.437387   32409 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:10:43.437550   32409 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:10:43.515114   32409 ssh_runner.go:195] Run: systemctl --version
	I0729 01:10:43.521045   32409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:43.536986   32409 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:43.537016   32409 api_server.go:166] Checking apiserver status ...
	I0729 01:10:43.537052   32409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:43.551384   32409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:10:43.561829   32409 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:43.561881   32409 ssh_runner.go:195] Run: ls
	I0729 01:10:43.566820   32409 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:43.570637   32409 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:43.570661   32409 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:10:43.570672   32409 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:43.570702   32409 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:10:43.571067   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:43.571109   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:43.585612   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I0729 01:10:43.586028   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:43.586523   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:43.586541   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:43.586797   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:43.586973   32409 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:10:43.588395   32409 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:10:43.588411   32409 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:43.588673   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:43.588702   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:43.603656   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0729 01:10:43.604091   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:43.604522   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:43.604543   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:43.604808   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:43.604987   32409 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:10:43.607886   32409 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:43.608340   32409 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:43.608379   32409 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:43.608493   32409 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:43.608800   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:43.608838   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:43.623300   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0729 01:10:43.623867   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:43.624375   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:43.624394   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:43.624684   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:43.624856   32409 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:10:43.625050   32409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:43.625072   32409 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:10:43.627679   32409 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:43.628079   32409 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:43.628124   32409 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:43.628211   32409 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:10:43.628361   32409 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:10:43.628554   32409 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:10:43.628720   32409 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:10:46.179291   32409 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:10:46.179403   32409 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:10:46.179427   32409 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:46.179440   32409 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:10:46.179463   32409 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:46.179475   32409 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:10:46.179782   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:46.179832   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:46.194769   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I0729 01:10:46.195157   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:46.195691   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:46.195707   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:46.196073   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:46.196304   32409 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:10:46.197953   32409 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:10:46.197968   32409 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:46.198278   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:46.198326   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:46.213077   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0729 01:10:46.213496   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:46.213940   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:46.213961   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:46.214201   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:46.214334   32409 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:10:46.216976   32409 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:46.217434   32409 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:46.217465   32409 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:46.217601   32409 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:46.217982   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:46.218018   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:46.232948   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39869
	I0729 01:10:46.233310   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:46.233728   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:46.233746   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:46.234044   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:46.234214   32409 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:10:46.234404   32409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:46.234432   32409 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:10:46.237186   32409 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:46.237526   32409 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:46.237552   32409 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:46.237682   32409 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:10:46.237831   32409 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:10:46.237988   32409 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:10:46.238156   32409 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:10:46.323268   32409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:46.339000   32409 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:46.339027   32409 api_server.go:166] Checking apiserver status ...
	I0729 01:10:46.339095   32409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:46.353645   32409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:10:46.363564   32409 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:46.363631   32409 ssh_runner.go:195] Run: ls
	I0729 01:10:46.368652   32409 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:46.372986   32409 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:46.373008   32409 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:10:46.373016   32409 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:46.373034   32409 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:10:46.373387   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:46.373432   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:46.389402   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0729 01:10:46.389781   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:46.390277   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:46.390302   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:46.390611   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:46.390822   32409 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:10:46.392548   32409 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:10:46.392564   32409 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:46.392938   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:46.392986   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:46.407761   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0729 01:10:46.408089   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:46.408496   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:46.408518   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:46.408783   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:46.408974   32409 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:10:46.411752   32409 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:46.412120   32409 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:46.412158   32409 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:46.412302   32409 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:46.412586   32409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:46.412619   32409 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:46.426816   32409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0729 01:10:46.427240   32409 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:46.427658   32409 main.go:141] libmachine: Using API Version  1
	I0729 01:10:46.427678   32409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:46.427955   32409 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:46.428129   32409 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:10:46.428272   32409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:46.428288   32409 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:10:46.431078   32409 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:46.431526   32409 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:46.431565   32409 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:46.431668   32409 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:10:46.431815   32409 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:10:46.431953   32409 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:10:46.432174   32409 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:10:46.519077   32409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:46.533792   32409 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
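The status run above decides "apiserver: Running" by probing the control-plane VIP's /healthz endpoint and treating an HTTP 200 with body "ok" as healthy. As a minimal sketch only (not minikube's actual implementation; the URL, timeout, and skipping TLS verification are illustrative assumptions), a probe of that shape looks like this in Go:

// healthz_probe.go: illustrative sketch of the kind of check logged as
// "Checking apiserver healthz at https://192.168.39.254:8443/healthz ..."
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func apiserverHealthy(healthzURL string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The cluster's serving cert is typically not in the host trust store,
		// so a probe like this often skips verification (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Healthy when the endpoint returns 200 and the body is "ok",
	// matching the "returned 200: ok" lines in the log.
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println(ok, err)
}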
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (5.47222548s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:10:47.245425   32508 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:10:47.245676   32508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:47.245685   32508 out.go:304] Setting ErrFile to fd 2...
	I0729 01:10:47.245690   32508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:47.245896   32508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:10:47.246099   32508 out.go:298] Setting JSON to false
	I0729 01:10:47.246127   32508 mustload.go:65] Loading cluster: ha-845088
	I0729 01:10:47.246235   32508 notify.go:220] Checking for updates...
	I0729 01:10:47.246627   32508 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:10:47.246645   32508 status.go:255] checking status of ha-845088 ...
	I0729 01:10:47.247101   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:47.247180   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:47.266801   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I0729 01:10:47.267339   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:47.268053   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:47.268095   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:47.268427   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:47.268764   32508 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:10:47.270545   32508 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:10:47.270560   32508 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:47.270877   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:47.270917   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:47.285356   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0729 01:10:47.285719   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:47.286253   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:47.286272   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:47.286564   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:47.286767   32508 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:10:47.289529   32508 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:47.289935   32508 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:47.289968   32508 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:47.290072   32508 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:47.290393   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:47.290429   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:47.304778   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I0729 01:10:47.305183   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:47.305716   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:47.305752   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:47.306053   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:47.306240   32508 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:10:47.306446   32508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:47.306485   32508 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:10:47.309192   32508 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:47.309585   32508 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:47.309621   32508 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:47.309809   32508 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:10:47.309990   32508 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:10:47.310135   32508 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:10:47.310280   32508 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:10:47.387458   32508 ssh_runner.go:195] Run: systemctl --version
	I0729 01:10:47.394904   32508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:47.412267   32508 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:47.412303   32508 api_server.go:166] Checking apiserver status ...
	I0729 01:10:47.412341   32508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:47.426205   32508 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:10:47.435575   32508 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:47.435630   32508 ssh_runner.go:195] Run: ls
	I0729 01:10:47.441632   32508 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:47.445690   32508 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:47.445713   32508 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:10:47.445723   32508 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:47.445738   32508 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:10:47.446091   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:47.446124   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:47.461176   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0729 01:10:47.461631   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:47.462144   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:47.462162   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:47.462575   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:47.462758   32508 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:10:47.464348   32508 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:10:47.464365   32508 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:47.464654   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:47.464684   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:47.480534   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
	I0729 01:10:47.480902   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:47.481355   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:47.481375   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:47.481646   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:47.481826   32508 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:10:47.484540   32508 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:47.484993   32508 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:47.485020   32508 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:47.485145   32508 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:47.485420   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:47.485453   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:47.499642   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I0729 01:10:47.500074   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:47.500566   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:47.500586   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:47.500916   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:47.501111   32508 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:10:47.501281   32508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:47.501303   32508 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:10:47.503621   32508 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:47.504017   32508 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:47.504034   32508 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:47.504180   32508 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:10:47.504332   32508 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:10:47.504489   32508 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:10:47.504616   32508 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:10:49.251316   32508 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:49.251366   32508 retry.go:31] will retry after 242.539064ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:10:52.323349   32508 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:10:52.323454   32508 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:10:52.323474   32508 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:52.323481   32508 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:10:52.323511   32508 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:52.323525   32508 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:10:52.323957   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:52.324001   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:52.338532   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I0729 01:10:52.338982   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:52.339500   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:52.339534   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:52.339812   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:52.339977   32508 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:10:52.341427   32508 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:10:52.341445   32508 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:52.341812   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:52.341855   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:52.356864   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I0729 01:10:52.357343   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:52.357796   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:52.357814   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:52.358124   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:52.358278   32508 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:10:52.361051   32508 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:52.361576   32508 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:52.361601   32508 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:52.361769   32508 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:52.362063   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:52.362101   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:52.377263   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0729 01:10:52.377669   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:52.378173   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:52.378198   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:52.378495   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:52.378673   32508 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:10:52.378829   32508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:52.378851   32508 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:10:52.381372   32508 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:52.381724   32508 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:52.381748   32508 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:52.381881   32508 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:10:52.382066   32508 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:10:52.382199   32508 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:10:52.382354   32508 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:10:52.467254   32508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:52.483346   32508 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:52.483370   32508 api_server.go:166] Checking apiserver status ...
	I0729 01:10:52.483420   32508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:52.498414   32508 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:10:52.507952   32508 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:52.508001   32508 ssh_runner.go:195] Run: ls
	I0729 01:10:52.512500   32508 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:52.519297   32508 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:52.519318   32508 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:10:52.519326   32508 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:52.519339   32508 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:10:52.519682   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:52.519721   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:52.535568   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I0729 01:10:52.535953   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:52.536375   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:52.536397   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:52.536759   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:52.536961   32508 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:10:52.538529   32508 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:10:52.538543   32508 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:52.538926   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:52.538973   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:52.553620   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0729 01:10:52.554163   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:52.554615   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:52.554642   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:52.554992   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:52.555233   32508 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:10:52.558396   32508 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:52.558991   32508 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:52.559031   32508 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:52.559193   32508 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:52.559503   32508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:52.559545   32508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:52.574724   32508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I0729 01:10:52.575142   32508 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:52.575615   32508 main.go:141] libmachine: Using API Version  1
	I0729 01:10:52.575635   32508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:52.575929   32508 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:52.576144   32508 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:10:52.576305   32508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:52.576322   32508 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:10:52.579138   32508 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:52.579562   32508 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:52.579584   32508 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:52.579703   32508 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:10:52.579896   32508 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:10:52.580038   32508 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:10:52.580189   32508 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:10:52.662807   32508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:52.677116   32508 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
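The m02 section of that run shows the failure mode behind "host: Error": the SSH dial to 192.168.39.68:22 returns "no route to host", is retried briefly, and then the node is reported with kubelet and apiserver Nonexistent. A minimal sketch of a bounded dial-and-retry loop of that shape follows; the attempt count, per-dial timeout, and fixed backoff are assumptions for illustration, not the sshutil/retry code minikube uses.

// dial_retry.go: illustrative TCP dial with a small retry budget.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return fmt.Errorf("host unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	// A persistent failure here is what drives the "host: Error" /
	// "kubelet: Nonexistent" rows for ha-845088-m02 in the status output.
	if err := dialWithRetry("192.168.39.68:22", 3, 250*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}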
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (4.479081789s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:10:54.619942   32625 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:10:54.620439   32625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:54.620453   32625 out.go:304] Setting ErrFile to fd 2...
	I0729 01:10:54.620460   32625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:10:54.620933   32625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:10:54.621422   32625 out.go:298] Setting JSON to false
	I0729 01:10:54.621457   32625 mustload.go:65] Loading cluster: ha-845088
	I0729 01:10:54.621539   32625 notify.go:220] Checking for updates...
	I0729 01:10:54.621924   32625 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:10:54.621942   32625 status.go:255] checking status of ha-845088 ...
	I0729 01:10:54.622335   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:54.622379   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:54.636815   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41167
	I0729 01:10:54.637225   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:54.637765   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:54.637803   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:54.638131   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:54.638307   32625 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:10:54.639753   32625 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:10:54.639768   32625 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:54.640069   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:54.640107   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:54.654162   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40709
	I0729 01:10:54.654525   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:54.654989   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:54.655011   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:54.655399   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:54.655615   32625 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:10:54.658270   32625 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:54.658640   32625 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:54.658673   32625 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:54.658775   32625 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:10:54.659245   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:54.659288   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:54.674730   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0729 01:10:54.675112   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:54.675510   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:54.675532   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:54.675876   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:54.676077   32625 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:10:54.676291   32625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:54.676316   32625 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:10:54.678851   32625 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:54.679361   32625 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:10:54.679395   32625 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:10:54.679728   32625 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:10:54.679888   32625 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:10:54.680056   32625 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:10:54.680211   32625 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:10:54.758833   32625 ssh_runner.go:195] Run: systemctl --version
	I0729 01:10:54.767203   32625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:54.782720   32625 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:54.782745   32625 api_server.go:166] Checking apiserver status ...
	I0729 01:10:54.782776   32625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:54.797829   32625 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:10:54.810838   32625 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:54.810882   32625 ssh_runner.go:195] Run: ls
	I0729 01:10:54.818005   32625 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:54.825732   32625 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:54.825763   32625 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:10:54.825777   32625 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:54.825806   32625 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:10:54.826154   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:54.826198   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:54.841666   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0729 01:10:54.842048   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:54.842546   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:54.842567   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:54.842967   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:54.843187   32625 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:10:54.844917   32625 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:10:54.844934   32625 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:54.845250   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:54.845312   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:54.860901   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0729 01:10:54.861324   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:54.861803   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:54.861827   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:54.862105   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:54.862259   32625 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:10:54.864934   32625 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:54.865326   32625 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:54.865366   32625 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:54.865498   32625 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:10:54.865781   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:54.865815   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:54.880852   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0729 01:10:54.881338   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:54.881747   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:54.881759   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:54.882033   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:54.882200   32625 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:10:54.882365   32625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:54.882386   32625 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:10:54.885254   32625 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:54.885683   32625 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:10:54.885711   32625 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:10:54.885872   32625 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:10:54.886023   32625 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:10:54.886177   32625 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:10:54.886295   32625 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:10:55.391315   32625 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:55.391356   32625 retry.go:31] will retry after 222.105667ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:10:58.687287   32625 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:10:58.687398   32625 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:10:58.687424   32625 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:58.687433   32625 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:10:58.687467   32625 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:10:58.687481   32625 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:10:58.687811   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:58.687864   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:58.702379   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0729 01:10:58.702761   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:58.703241   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:58.703263   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:58.703559   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:58.703745   32625 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:10:58.705104   32625 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:10:58.705125   32625 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:58.705408   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:58.705441   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:58.720011   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0729 01:10:58.720403   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:58.720832   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:58.720856   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:58.721164   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:58.721361   32625 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:10:58.724573   32625 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:58.725145   32625 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:58.725180   32625 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:58.725329   32625 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:10:58.725713   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:58.725756   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:58.739796   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0729 01:10:58.740204   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:58.740706   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:58.740730   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:58.741070   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:58.741303   32625 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:10:58.741487   32625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:58.741505   32625 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:10:58.744081   32625 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:58.744466   32625 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:10:58.744487   32625 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:10:58.744642   32625 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:10:58.744809   32625 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:10:58.744979   32625 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:10:58.745111   32625 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:10:58.831736   32625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:58.849716   32625 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:10:58.849753   32625 api_server.go:166] Checking apiserver status ...
	I0729 01:10:58.849791   32625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:10:58.872213   32625 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:10:58.882491   32625 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:10:58.882546   32625 ssh_runner.go:195] Run: ls
	I0729 01:10:58.886847   32625 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:10:58.890982   32625 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:10:58.891000   32625 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:10:58.891008   32625 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:10:58.891023   32625 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:10:58.891336   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:58.891370   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:58.906675   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0729 01:10:58.907092   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:58.907600   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:58.907636   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:58.908039   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:58.908257   32625 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:10:58.909902   32625 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:10:58.909917   32625 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:58.910366   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:58.910413   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:58.925629   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38271
	I0729 01:10:58.926031   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:58.926462   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:58.926482   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:58.926788   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:58.926970   32625 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:10:58.929561   32625 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:58.929905   32625 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:58.929951   32625 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:58.930047   32625 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:10:58.930338   32625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:10:58.930369   32625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:10:58.945186   32625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38899
	I0729 01:10:58.945656   32625 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:10:58.946222   32625 main.go:141] libmachine: Using API Version  1
	I0729 01:10:58.946255   32625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:10:58.946628   32625 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:10:58.946826   32625 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:10:58.947000   32625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:10:58.947021   32625 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:10:58.950014   32625 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:58.950451   32625 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:10:58.950476   32625 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:10:58.950614   32625 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:10:58.950774   32625 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:10:58.950986   32625 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:10:58.951170   32625 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:10:59.038888   32625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:10:59.056027   32625 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
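The recurring warning "unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup: Process exited with status 1" in these stderr blocks is benign: on a cgroup v2 host, /proc/<pid>/cgroup contains only a single "0::/..." entry with no per-controller "freezer:" line, so the grep exits non-zero and the status code falls back to the /healthz probe. A small sketch of that distinction, with the file path and parsing chosen for illustration:

// freezer_check.go: report whether a process has a cgroup v1 freezer hierarchy.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func hasFreezerHierarchy(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// cgroup v1 lines look like "7:freezer:/..."; cgroup v2 exposes only "0::/...".
		fields := strings.SplitN(sc.Text(), ":", 3)
		if len(fields) == 3 && fields[1] == "freezer" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasFreezerHierarchy("/proc/self/cgroup")
	fmt.Println("freezer hierarchy present:", ok, err)
}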
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (4.57386011s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:11:00.991878   32726 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:11:00.992002   32726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:00.992014   32726 out.go:304] Setting ErrFile to fd 2...
	I0729 01:11:00.992020   32726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:00.992210   32726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:11:00.992368   32726 out.go:298] Setting JSON to false
	I0729 01:11:00.992392   32726 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:00.992441   32726 notify.go:220] Checking for updates...
	I0729 01:11:00.992843   32726 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:00.992863   32726 status.go:255] checking status of ha-845088 ...
	I0729 01:11:00.993324   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:00.993357   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:01.012280   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35977
	I0729 01:11:01.012675   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:01.013319   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:01.013345   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:01.013658   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:01.013869   32726 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:11:01.015511   32726 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:11:01.015527   32726 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:01.015828   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:01.015874   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:01.031379   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0729 01:11:01.031715   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:01.032188   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:01.032210   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:01.032572   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:01.032718   32726 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:11:01.035557   32726 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:01.036003   32726 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:01.036036   32726 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:01.036178   32726 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:01.036476   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:01.036514   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:01.050772   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0729 01:11:01.051248   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:01.051719   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:01.051740   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:01.052013   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:01.052194   32726 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:11:01.052387   32726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:01.052425   32726 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:11:01.055115   32726 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:01.055549   32726 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:01.055568   32726 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:01.055720   32726 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:11:01.055885   32726 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:11:01.056032   32726 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:11:01.056165   32726 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:11:01.135938   32726 ssh_runner.go:195] Run: systemctl --version
	I0729 01:11:01.142236   32726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:01.162169   32726 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:01.162192   32726 api_server.go:166] Checking apiserver status ...
	I0729 01:11:01.162228   32726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:01.182420   32726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:11:01.196905   32726 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:01.196970   32726 ssh_runner.go:195] Run: ls
	I0729 01:11:01.201653   32726 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:01.205703   32726 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:01.205721   32726 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:11:01.205730   32726 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:01.205745   32726 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:11:01.206043   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:01.206073   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:01.220712   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I0729 01:11:01.221146   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:01.221672   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:01.221691   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:01.222147   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:01.222335   32726 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:11:01.223781   32726 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:11:01.223799   32726 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:11:01.224174   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:01.224211   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:01.238661   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0729 01:11:01.239042   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:01.239500   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:01.239522   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:01.239870   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:01.240063   32726 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:11:01.242790   32726 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:01.243232   32726 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:11:01.243255   32726 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:01.243398   32726 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:11:01.243688   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:01.243726   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:01.258617   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0729 01:11:01.259003   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:01.259527   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:01.259557   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:01.259904   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:01.260108   32726 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:11:01.260337   32726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:01.260358   32726 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:11:01.263476   32726 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:01.264009   32726 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:11:01.264035   32726 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:01.264232   32726 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:11:01.264411   32726 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:11:01.264572   32726 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:11:01.264795   32726 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:11:01.763257   32726 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:01.763315   32726 retry.go:31] will retry after 317.969014ms: dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:11:05.155306   32726 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:11:05.155399   32726 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:11:05.155417   32726 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:05.155424   32726 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:11:05.155445   32726 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:05.155463   32726 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:11:05.155739   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:05.155776   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:05.170702   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0729 01:11:05.171132   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:05.171571   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:05.171594   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:05.172029   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:05.172233   32726 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:11:05.173770   32726 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:11:05.173786   32726 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:05.174059   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:05.174098   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:05.188411   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0729 01:11:05.188819   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:05.189269   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:05.189292   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:05.189619   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:05.189838   32726 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:11:05.193001   32726 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:05.193455   32726 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:05.193479   32726 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:05.193783   32726 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:05.194188   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:05.194234   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:05.208548   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0729 01:11:05.208987   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:05.209427   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:05.209446   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:05.209720   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:05.209890   32726 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:11:05.210053   32726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:05.210076   32726 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:11:05.212693   32726 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:05.213093   32726 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:05.213123   32726 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:05.213266   32726 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:11:05.213418   32726 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:11:05.213541   32726 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:11:05.213670   32726 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:11:05.301641   32726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:05.321735   32726 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:05.321769   32726 api_server.go:166] Checking apiserver status ...
	I0729 01:11:05.321804   32726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:05.341441   32726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:11:05.352314   32726 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:05.352362   32726 ssh_runner.go:195] Run: ls
	I0729 01:11:05.357448   32726 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:05.363603   32726 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:05.363623   32726 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:11:05.363630   32726 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:05.363644   32726 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:11:05.363925   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:05.363953   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:05.378720   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40191
	I0729 01:11:05.379149   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:05.379564   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:05.379584   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:05.379878   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:05.380071   32726 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:05.381547   32726 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:11:05.381561   32726 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:05.382088   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:05.382132   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:05.397643   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0729 01:11:05.398030   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:05.398422   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:05.398439   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:05.398710   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:05.398861   32726 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:11:05.401568   32726 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:05.401962   32726 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:05.401994   32726 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:05.402102   32726 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:05.402386   32726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:05.402422   32726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:05.417839   32726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I0729 01:11:05.418226   32726 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:05.418784   32726 main.go:141] libmachine: Using API Version  1
	I0729 01:11:05.418808   32726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:05.419193   32726 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:05.419400   32726 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:11:05.419592   32726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:05.419611   32726 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:11:05.422550   32726 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:05.422969   32726 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:05.423006   32726 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:05.423146   32726 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:11:05.423306   32726 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:11:05.423456   32726 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:11:05.423590   32726 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:11:05.510690   32726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:05.524426   32726 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
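
Every retry above fails at the same point: the SSH dial to 192.168.39.68:22 returns "no route to host", so the kubelet and apiserver checks for ha-845088-m02 can never run and the node is reported as Host:Error with both marked Nonexistent. The sketch below shows that mapping in isolation; the field names are copied from the Status structs printed in the log, but statusFromDialErr is a hypothetical helper written for illustration, not minikube's own function.

// A hedged sketch of how an SSH dial failure maps onto the per-node status
// rendered by "minikube status". The struct mirrors a subset of the fields
// visible in lines like
// "&{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent ...}".
package main

import "fmt"

type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// statusFromDialErr is a hypothetical helper for illustration only.
func statusFromDialErr(name string, worker bool, dialErr error) Status {
	if dialErr != nil {
		// Host is unreachable, so kubelet/apiserver state cannot be observed.
		return Status{Name: name, Host: "Error", Kubelet: "Nonexistent",
			APIServer: "Nonexistent", Kubeconfig: "Configured", Worker: worker}
	}
	return Status{Name: name, Host: "Running", Kubelet: "Running",
		APIServer: "Running", Kubeconfig: "Configured", Worker: worker}
}

func main() {
	err := fmt.Errorf("dial tcp 192.168.39.68:22: connect: no route to host")
	fmt.Printf("%+v\n", statusFromDialErr("ha-845088-m02", false, err))
}

Because one control-plane node resolves to Host:Error, each status invocation exits with status 3, which is what the test assertions at ha_test.go:428 keep tripping over in the runs that follow.
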
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (3.738760884s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:11:08.434587   32826 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:11:08.434845   32826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:08.434857   32826 out.go:304] Setting ErrFile to fd 2...
	I0729 01:11:08.434864   32826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:08.435145   32826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:11:08.435301   32826 out.go:298] Setting JSON to false
	I0729 01:11:08.435324   32826 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:08.435455   32826 notify.go:220] Checking for updates...
	I0729 01:11:08.435806   32826 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:08.435825   32826 status.go:255] checking status of ha-845088 ...
	I0729 01:11:08.436341   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:08.436396   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:08.456482   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43607
	I0729 01:11:08.456941   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:08.457481   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:08.457501   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:08.457825   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:08.458019   32826 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:11:08.459779   32826 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:11:08.459798   32826 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:08.460160   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:08.460193   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:08.475728   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0729 01:11:08.476098   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:08.476605   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:08.476627   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:08.476890   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:08.477086   32826 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:11:08.479727   32826 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:08.480084   32826 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:08.480114   32826 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:08.480277   32826 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:08.480602   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:08.480658   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:08.494760   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0729 01:11:08.495229   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:08.495702   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:08.495719   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:08.495995   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:08.496152   32826 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:11:08.496337   32826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:08.496361   32826 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:11:08.498850   32826 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:08.499313   32826 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:08.499344   32826 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:08.499477   32826 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:11:08.499622   32826 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:11:08.499832   32826 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:11:08.500001   32826 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:11:08.585022   32826 ssh_runner.go:195] Run: systemctl --version
	I0729 01:11:08.592531   32826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:08.609182   32826 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:08.609208   32826 api_server.go:166] Checking apiserver status ...
	I0729 01:11:08.609241   32826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:08.625435   32826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:11:08.638273   32826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:08.638321   32826 ssh_runner.go:195] Run: ls
	I0729 01:11:08.643125   32826 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:08.650523   32826 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:08.650549   32826 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:11:08.650558   32826 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:08.650572   32826 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:11:08.650887   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:08.650932   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:08.666312   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44821
	I0729 01:11:08.666723   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:08.667198   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:08.667235   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:08.667576   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:08.667776   32826 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:11:08.669604   32826 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:11:08.669620   32826 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:11:08.670042   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:08.670082   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:08.685588   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0729 01:11:08.686001   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:08.686442   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:08.686462   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:08.686719   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:08.686905   32826 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:11:08.689452   32826 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:08.689831   32826 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:11:08.689865   32826 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:08.690014   32826 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:11:08.690353   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:08.690389   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:08.705443   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0729 01:11:08.705806   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:08.706291   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:08.706311   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:08.706591   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:08.706806   32826 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:11:08.707010   32826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:08.707036   32826 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:11:08.709644   32826 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:08.710057   32826 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:11:08.710090   32826 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:08.710168   32826 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:11:08.710353   32826 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:11:08.710488   32826 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:11:08.710623   32826 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:11:11.775296   32826 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:11:11.775421   32826 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:11:11.775442   32826 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:11.775454   32826 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:11:11.775471   32826 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:11.775479   32826 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:11:11.775758   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:11.775801   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:11.790534   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I0729 01:11:11.790966   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:11.791438   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:11.791461   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:11.791739   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:11.791877   32826 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:11:11.793518   32826 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:11:11.793534   32826 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:11.793840   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:11.793874   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:11.808150   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I0729 01:11:11.808611   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:11.809104   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:11.809132   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:11.809394   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:11.809571   32826 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:11:11.812006   32826 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:11.812448   32826 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:11.812475   32826 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:11.812591   32826 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:11.812964   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:11.813005   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:11.827918   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I0729 01:11:11.828359   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:11.828965   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:11.828993   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:11.829284   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:11.829474   32826 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:11:11.829675   32826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:11.829692   32826 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:11:11.832628   32826 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:11.833174   32826 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:11.833198   32826 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:11.833346   32826 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:11:11.833496   32826 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:11:11.833625   32826 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:11:11.833807   32826 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:11:11.921052   32826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:11.936764   32826 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:11.936799   32826 api_server.go:166] Checking apiserver status ...
	I0729 01:11:11.936841   32826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:11.950867   32826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:11:11.961926   32826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:11.961996   32826 ssh_runner.go:195] Run: ls
	I0729 01:11:11.966843   32826 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:11.972796   32826 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:11.972818   32826 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:11:11.972827   32826 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:11.972850   32826 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:11:11.973180   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:11.973225   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:11.987794   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0729 01:11:11.988236   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:11.988698   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:11.988722   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:11.989055   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:11.989254   32826 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:11.990703   32826 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:11:11.990718   32826 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:11.990992   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:11.991022   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:12.006414   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I0729 01:11:12.006802   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:12.007302   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:12.007341   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:12.007692   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:12.007868   32826 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:11:12.010709   32826 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:12.011184   32826 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:12.011211   32826 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:12.011349   32826 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:12.011634   32826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:12.011679   32826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:12.026726   32826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41159
	I0729 01:11:12.027136   32826 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:12.027540   32826 main.go:141] libmachine: Using API Version  1
	I0729 01:11:12.027560   32826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:12.027868   32826 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:12.028050   32826 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:11:12.028234   32826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:12.028251   32826 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:11:12.030743   32826 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:12.031278   32826 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:12.031304   32826 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:12.031433   32826 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:11:12.031626   32826 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:11:12.031793   32826 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:11:12.031942   32826 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:11:12.118470   32826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:12.132957   32826 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (3.735142064s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:11:15.457160   32943 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:11:15.457429   32943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:15.457439   32943 out.go:304] Setting ErrFile to fd 2...
	I0729 01:11:15.457444   32943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:15.457625   32943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:11:15.457778   32943 out.go:298] Setting JSON to false
	I0729 01:11:15.457802   32943 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:15.457843   32943 notify.go:220] Checking for updates...
	I0729 01:11:15.458160   32943 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:15.458173   32943 status.go:255] checking status of ha-845088 ...
	I0729 01:11:15.458568   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:15.458656   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:15.477061   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44901
	I0729 01:11:15.477504   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:15.478150   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:15.478180   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:15.478545   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:15.478730   32943 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:11:15.480260   32943 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:11:15.480275   32943 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:15.480526   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:15.480555   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:15.494630   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0729 01:11:15.495006   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:15.495408   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:15.495432   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:15.495713   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:15.495855   32943 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:11:15.498378   32943 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:15.498768   32943 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:15.498795   32943 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:15.499019   32943 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:15.499410   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:15.499453   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:15.513221   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0729 01:11:15.513592   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:15.514011   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:15.514032   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:15.514320   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:15.514522   32943 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:11:15.514703   32943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:15.514731   32943 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:11:15.517199   32943 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:15.517556   32943 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:15.517574   32943 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:15.517725   32943 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:11:15.517871   32943 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:11:15.518002   32943 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:11:15.518122   32943 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:11:15.599427   32943 ssh_runner.go:195] Run: systemctl --version
	I0729 01:11:15.605565   32943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:15.620857   32943 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:15.620891   32943 api_server.go:166] Checking apiserver status ...
	I0729 01:11:15.620923   32943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:15.636478   32943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:11:15.649147   32943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:15.649195   32943 ssh_runner.go:195] Run: ls
	I0729 01:11:15.654952   32943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:15.660810   32943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:15.660831   32943 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:11:15.660842   32943 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:15.660861   32943 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:11:15.661152   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:15.661190   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:15.676468   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0729 01:11:15.676837   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:15.677306   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:15.677326   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:15.677670   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:15.677867   32943 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:11:15.679363   32943 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:11:15.679379   32943 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:11:15.679657   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:15.679711   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:15.694577   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I0729 01:11:15.695005   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:15.695474   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:15.695493   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:15.695810   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:15.695968   32943 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:11:15.698359   32943 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:15.698712   32943 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:11:15.698748   32943 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:15.698916   32943 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:11:15.699219   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:15.699261   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:15.713441   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0729 01:11:15.713837   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:15.714279   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:15.714298   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:15.714578   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:15.714861   32943 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:11:15.715047   32943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:15.715094   32943 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:11:15.717885   32943 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:15.718292   32943 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:11:15.718318   32943 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:11:15.718458   32943 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:11:15.718595   32943 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:11:15.718763   32943 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:11:15.718889   32943 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	W0729 01:11:18.783338   32943 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0729 01:11:18.783428   32943 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0729 01:11:18.783444   32943 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:18.783451   32943 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:11:18.783466   32943 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0729 01:11:18.783474   32943 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:11:18.783776   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:18.783813   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:18.798441   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0729 01:11:18.798842   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:18.799307   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:18.799331   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:18.799705   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:18.799934   32943 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:11:18.801509   32943 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:11:18.801524   32943 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:18.801908   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:18.801943   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:18.816144   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I0729 01:11:18.816506   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:18.816913   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:18.816939   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:18.817219   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:18.817407   32943 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:11:18.820125   32943 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:18.820540   32943 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:18.820561   32943 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:18.820744   32943 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:18.821026   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:18.821066   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:18.836147   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0729 01:11:18.836491   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:18.836917   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:18.836945   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:18.837222   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:18.837379   32943 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:11:18.837559   32943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:18.837577   32943 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:11:18.840032   32943 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:18.840425   32943 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:18.840462   32943 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:18.840597   32943 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:11:18.840737   32943 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:11:18.840848   32943 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:11:18.840936   32943 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:11:18.932198   32943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:18.950411   32943 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:18.950435   32943 api_server.go:166] Checking apiserver status ...
	I0729 01:11:18.950462   32943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:18.969773   32943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:11:18.981010   32943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:18.981069   32943 ssh_runner.go:195] Run: ls
	I0729 01:11:18.985606   32943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:18.990020   32943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:18.990040   32943 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:11:18.990048   32943 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:18.990070   32943 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:11:18.990350   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:18.990397   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:19.004729   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0729 01:11:19.005182   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:19.005631   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:19.005650   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:19.006038   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:19.006229   32943 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:19.007673   32943 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:11:19.007690   32943 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:19.007962   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:19.007995   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:19.021787   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I0729 01:11:19.022189   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:19.022600   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:19.022618   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:19.022890   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:19.023070   32943 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:11:19.025785   32943 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:19.026281   32943 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:19.026312   32943 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:19.026434   32943 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:19.026775   32943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:19.026816   32943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:19.042074   32943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0729 01:11:19.042485   32943 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:19.042936   32943 main.go:141] libmachine: Using API Version  1
	I0729 01:11:19.042958   32943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:19.043400   32943 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:19.043615   32943 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:11:19.043792   32943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:19.043819   32943 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:11:19.047046   32943 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:19.047569   32943 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:19.047599   32943 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:19.047763   32943 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:11:19.047953   32943 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:11:19.048156   32943 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:11:19.048319   32943 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:11:19.134834   32943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:19.150533   32943 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 7 (629.682311ms)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:11:24.956243   33078 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:11:24.956487   33078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:24.956497   33078 out.go:304] Setting ErrFile to fd 2...
	I0729 01:11:24.956501   33078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:24.956662   33078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:11:24.956810   33078 out.go:298] Setting JSON to false
	I0729 01:11:24.956833   33078 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:24.956952   33078 notify.go:220] Checking for updates...
	I0729 01:11:24.957228   33078 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:24.957246   33078 status.go:255] checking status of ha-845088 ...
	I0729 01:11:24.957773   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:24.957835   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:24.976891   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I0729 01:11:24.977299   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:24.977952   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:24.977978   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:24.978329   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:24.978530   33078 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:11:24.980345   33078 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:11:24.980362   33078 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:24.980710   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:24.980755   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:24.995874   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0729 01:11:24.996319   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:24.996813   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:24.996833   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:24.997144   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:24.997338   33078 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:11:25.000336   33078 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:25.000760   33078 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:25.000788   33078 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:25.000924   33078 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:25.001209   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.001243   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.018213   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0729 01:11:25.018600   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.019123   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.019147   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.019443   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.019610   33078 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:11:25.019848   33078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:25.019873   33078 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:11:25.022902   33078 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:25.023460   33078 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:25.023779   33078 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:25.023836   33078 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:11:25.024017   33078 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:11:25.024163   33078 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:11:25.024300   33078 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:11:25.103278   33078 ssh_runner.go:195] Run: systemctl --version
	I0729 01:11:25.109764   33078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:25.125610   33078 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:25.125644   33078 api_server.go:166] Checking apiserver status ...
	I0729 01:11:25.125689   33078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:25.146488   33078 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:11:25.158191   33078 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:25.158237   33078 ssh_runner.go:195] Run: ls
	I0729 01:11:25.163313   33078 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:25.167889   33078 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:25.167914   33078 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:11:25.167927   33078 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:25.167950   33078 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:11:25.168294   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.168339   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.183624   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0729 01:11:25.184048   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.184527   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.184552   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.184921   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.185104   33078 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:11:25.186845   33078 status.go:330] ha-845088-m02 host status = "Stopped" (err=<nil>)
	I0729 01:11:25.186859   33078 status.go:343] host is not running, skipping remaining checks
	I0729 01:11:25.186866   33078 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:25.186898   33078 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:11:25.187215   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.187267   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.201755   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0729 01:11:25.202249   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.202692   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.202710   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.203029   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.203225   33078 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:11:25.204716   33078 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:11:25.204741   33078 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:25.205156   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.205190   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.219624   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0729 01:11:25.220020   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.220419   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.220435   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.220760   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.220937   33078 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:11:25.223878   33078 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:25.224309   33078 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:25.224335   33078 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:25.224532   33078 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:25.224825   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.224862   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.239052   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
	I0729 01:11:25.239450   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.239937   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.239957   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.240260   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.240451   33078 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:11:25.240686   33078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:25.240710   33078 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:11:25.243679   33078 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:25.244176   33078 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:25.244201   33078 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:25.244390   33078 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:11:25.244553   33078 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:11:25.244703   33078 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:11:25.244826   33078 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:11:25.335842   33078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:25.354981   33078 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:25.355011   33078 api_server.go:166] Checking apiserver status ...
	I0729 01:11:25.355087   33078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:25.371050   33078 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:11:25.380801   33078 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:25.380887   33078 ssh_runner.go:195] Run: ls
	I0729 01:11:25.385237   33078 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:25.389380   33078 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:25.389399   33078 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:11:25.389407   33078 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:25.389426   33078 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:11:25.389711   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.389744   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.405403   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35219
	I0729 01:11:25.405816   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.406339   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.406360   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.406661   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.406857   33078 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:25.408221   33078 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:11:25.408237   33078 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:25.408505   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.408545   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.422543   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I0729 01:11:25.422918   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.423334   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.423353   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.423675   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.423857   33078 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:11:25.426673   33078 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:25.427180   33078 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:25.427204   33078 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:25.427385   33078 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:25.427654   33078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:25.427705   33078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:25.442583   33078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0729 01:11:25.442962   33078 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:25.443393   33078 main.go:141] libmachine: Using API Version  1
	I0729 01:11:25.443411   33078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:25.443720   33078 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:25.443888   33078 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:11:25.444061   33078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:25.444079   33078 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:11:25.446957   33078 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:25.447403   33078 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:25.447427   33078 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:25.447591   33078 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:11:25.447769   33078 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:11:25.447941   33078 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:11:25.448107   33078 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:11:25.530223   33078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:25.545101   33078 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0729 01:11:27.215204   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 7 (613.008113ms)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-845088-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:11:35.908868   33182 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:11:35.909111   33182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:35.909120   33182 out.go:304] Setting ErrFile to fd 2...
	I0729 01:11:35.909124   33182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:35.909306   33182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:11:35.909445   33182 out.go:298] Setting JSON to false
	I0729 01:11:35.909469   33182 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:35.909506   33182 notify.go:220] Checking for updates...
	I0729 01:11:35.909788   33182 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:35.909803   33182 status.go:255] checking status of ha-845088 ...
	I0729 01:11:35.910158   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:35.910200   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:35.930298   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0729 01:11:35.930711   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:35.931248   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:35.931270   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:35.931606   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:35.931772   33182 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:11:35.933441   33182 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:11:35.933458   33182 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:35.933766   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:35.933814   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:35.948580   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I0729 01:11:35.949027   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:35.949552   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:35.949574   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:35.949881   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:35.950052   33182 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:11:35.953040   33182 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:35.953432   33182 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:35.953459   33182 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:35.953596   33182 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:11:35.953884   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:35.953926   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:35.968379   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0729 01:11:35.968751   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:35.969199   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:35.969225   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:35.969547   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:35.969724   33182 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:11:35.969950   33182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:35.969983   33182 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:11:35.972476   33182 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:35.972868   33182 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:11:35.972901   33182 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:11:35.972991   33182 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:11:35.973160   33182 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:11:35.973311   33182 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:11:35.973428   33182 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:11:36.054443   33182 ssh_runner.go:195] Run: systemctl --version
	I0729 01:11:36.060319   33182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:36.077395   33182 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:36.077421   33182 api_server.go:166] Checking apiserver status ...
	I0729 01:11:36.077452   33182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:36.090932   33182 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0729 01:11:36.099464   33182 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:36.099535   33182 ssh_runner.go:195] Run: ls
	I0729 01:11:36.103682   33182 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:36.109645   33182 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:36.109665   33182 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:11:36.109675   33182 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:36.109696   33182 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:11:36.110104   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.110139   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.124551   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0729 01:11:36.124915   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.125355   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.125381   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.125662   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.125836   33182 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:11:36.127342   33182 status.go:330] ha-845088-m02 host status = "Stopped" (err=<nil>)
	I0729 01:11:36.127355   33182 status.go:343] host is not running, skipping remaining checks
	I0729 01:11:36.127360   33182 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:36.127374   33182 status.go:255] checking status of ha-845088-m03 ...
	I0729 01:11:36.127632   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.127688   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.142394   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0729 01:11:36.142800   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.143316   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.143337   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.143645   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.143864   33182 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:11:36.145508   33182 status.go:330] ha-845088-m03 host status = "Running" (err=<nil>)
	I0729 01:11:36.145522   33182 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:36.145922   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.145971   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.159781   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0729 01:11:36.160177   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.160652   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.160669   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.160943   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.161110   33182 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:11:36.163886   33182 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:36.164267   33182 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:36.164289   33182 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:36.164491   33182 host.go:66] Checking if "ha-845088-m03" exists ...
	I0729 01:11:36.164814   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.164859   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.179250   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0729 01:11:36.179636   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.180085   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.180102   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.180403   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.180573   33182 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:11:36.180753   33182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:36.180781   33182 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:11:36.183993   33182 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:36.184398   33182 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:36.184421   33182 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:36.184540   33182 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:11:36.184724   33182 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:11:36.184863   33182 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:11:36.185039   33182 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:11:36.275922   33182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:36.292418   33182 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:11:36.292445   33182 api_server.go:166] Checking apiserver status ...
	I0729 01:11:36.292475   33182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:11:36.306236   33182 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 01:11:36.316173   33182 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:11:36.316233   33182 ssh_runner.go:195] Run: ls
	I0729 01:11:36.320848   33182 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:11:36.324940   33182 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:11:36.324962   33182 status.go:422] ha-845088-m03 apiserver status = Running (err=<nil>)
	I0729 01:11:36.324970   33182 status.go:257] ha-845088-m03 status: &{Name:ha-845088-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:11:36.324985   33182 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:11:36.325259   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.325288   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.339749   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I0729 01:11:36.340212   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.340671   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.340692   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.340982   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.341151   33182 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:36.342477   33182 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:11:36.342490   33182 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:36.342749   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.342782   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.357653   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0729 01:11:36.358246   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.358834   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.358862   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.359247   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.359423   33182 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:11:36.362486   33182 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:36.362942   33182 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:36.362968   33182 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:36.363161   33182 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:11:36.363556   33182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:36.363600   33182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:36.378390   33182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0729 01:11:36.378770   33182 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:36.379348   33182 main.go:141] libmachine: Using API Version  1
	I0729 01:11:36.379373   33182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:36.379746   33182 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:36.379942   33182 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:11:36.380123   33182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:11:36.380145   33182 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:11:36.382524   33182 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:36.382829   33182 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:36.382858   33182 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:36.383071   33182 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:11:36.383220   33182 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:11:36.383326   33182 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:11:36.383506   33182 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:11:36.466430   33182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:11:36.481511   33182 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-845088 -n ha-845088
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-845088 logs -n 25: (1.399110506s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088:/home/docker/cp-test_ha-845088-m03_ha-845088.txt                       |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088 sudo cat                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088.txt                                 |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m04 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp testdata/cp-test.txt                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088:/home/docker/cp-test_ha-845088-m04_ha-845088.txt                       |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088 sudo cat                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088.txt                                 |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03:/home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m03 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-845088 node stop m02 -v=7                                                     | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-845088 node start m02 -v=7                                                    | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:03:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:03:12.121877   27502 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:03:12.122154   27502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:03:12.122164   27502 out.go:304] Setting ErrFile to fd 2...
	I0729 01:03:12.122168   27502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:03:12.122348   27502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:03:12.122892   27502 out.go:298] Setting JSON to false
	I0729 01:03:12.123711   27502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2738,"bootTime":1722212254,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:03:12.123766   27502 start.go:139] virtualization: kvm guest
	I0729 01:03:12.126179   27502 out.go:177] * [ha-845088] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:03:12.127700   27502 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:03:12.127697   27502 notify.go:220] Checking for updates...
	I0729 01:03:12.130313   27502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:03:12.131713   27502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:03:12.133085   27502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:03:12.134411   27502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:03:12.135783   27502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:03:12.137175   27502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:03:12.172209   27502 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 01:03:12.173552   27502 start.go:297] selected driver: kvm2
	I0729 01:03:12.173562   27502 start.go:901] validating driver "kvm2" against <nil>
	I0729 01:03:12.173572   27502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:03:12.174224   27502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:03:12.174292   27502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:03:12.189041   27502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:03:12.189114   27502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 01:03:12.189323   27502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:03:12.189349   27502 cni.go:84] Creating CNI manager for ""
	I0729 01:03:12.189355   27502 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 01:03:12.189360   27502 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 01:03:12.189418   27502 start.go:340] cluster config:
	{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:03:12.189503   27502 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:03:12.191160   27502 out.go:177] * Starting "ha-845088" primary control-plane node in "ha-845088" cluster
	I0729 01:03:12.192391   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:03:12.192425   27502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:03:12.192436   27502 cache.go:56] Caching tarball of preloaded images
	I0729 01:03:12.192516   27502 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:03:12.192529   27502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:03:12.192821   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:03:12.192841   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json: {Name:mkf0b69659feb56f46b54c3a61f0315d19af49eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:12.192976   27502 start.go:360] acquireMachinesLock for ha-845088: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:03:12.193009   27502 start.go:364] duration metric: took 17.052µs to acquireMachinesLock for "ha-845088"
	I0729 01:03:12.193030   27502 start.go:93] Provisioning new machine with config: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:03:12.193098   27502 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 01:03:12.194890   27502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:03:12.195002   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:03:12.195037   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:03:12.208952   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0729 01:03:12.209335   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:03:12.209831   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:03:12.209846   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:03:12.210186   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:03:12.210362   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:12.210532   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:12.210704   27502 start.go:159] libmachine.API.Create for "ha-845088" (driver="kvm2")
	I0729 01:03:12.210730   27502 client.go:168] LocalClient.Create starting
	I0729 01:03:12.210754   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:03:12.210787   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:03:12.210800   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:03:12.210853   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:03:12.210871   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:03:12.210884   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:03:12.210900   27502 main.go:141] libmachine: Running pre-create checks...
	I0729 01:03:12.210912   27502 main.go:141] libmachine: (ha-845088) Calling .PreCreateCheck
	I0729 01:03:12.211247   27502 main.go:141] libmachine: (ha-845088) Calling .GetConfigRaw
	I0729 01:03:12.211598   27502 main.go:141] libmachine: Creating machine...
	I0729 01:03:12.211612   27502 main.go:141] libmachine: (ha-845088) Calling .Create
	I0729 01:03:12.211746   27502 main.go:141] libmachine: (ha-845088) Creating KVM machine...
	I0729 01:03:12.213004   27502 main.go:141] libmachine: (ha-845088) DBG | found existing default KVM network
	I0729 01:03:12.213700   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.213583   27525 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0729 01:03:12.213717   27502 main.go:141] libmachine: (ha-845088) DBG | created network xml: 
	I0729 01:03:12.213728   27502 main.go:141] libmachine: (ha-845088) DBG | <network>
	I0729 01:03:12.213741   27502 main.go:141] libmachine: (ha-845088) DBG |   <name>mk-ha-845088</name>
	I0729 01:03:12.213750   27502 main.go:141] libmachine: (ha-845088) DBG |   <dns enable='no'/>
	I0729 01:03:12.213757   27502 main.go:141] libmachine: (ha-845088) DBG |   
	I0729 01:03:12.213768   27502 main.go:141] libmachine: (ha-845088) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 01:03:12.213779   27502 main.go:141] libmachine: (ha-845088) DBG |     <dhcp>
	I0729 01:03:12.213786   27502 main.go:141] libmachine: (ha-845088) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 01:03:12.213792   27502 main.go:141] libmachine: (ha-845088) DBG |     </dhcp>
	I0729 01:03:12.213806   27502 main.go:141] libmachine: (ha-845088) DBG |   </ip>
	I0729 01:03:12.213819   27502 main.go:141] libmachine: (ha-845088) DBG |   
	I0729 01:03:12.213831   27502 main.go:141] libmachine: (ha-845088) DBG | </network>
	I0729 01:03:12.213845   27502 main.go:141] libmachine: (ha-845088) DBG | 
	I0729 01:03:12.218774   27502 main.go:141] libmachine: (ha-845088) DBG | trying to create private KVM network mk-ha-845088 192.168.39.0/24...
	I0729 01:03:12.283925   27502 main.go:141] libmachine: (ha-845088) DBG | private KVM network mk-ha-845088 192.168.39.0/24 created
	I0729 01:03:12.283965   27502 main.go:141] libmachine: (ha-845088) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088 ...
	I0729 01:03:12.283979   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.283913   27525 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:03:12.284066   27502 main.go:141] libmachine: (ha-845088) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:03:12.284085   27502 main.go:141] libmachine: (ha-845088) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:03:12.517784   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.517610   27525 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa...
	I0729 01:03:12.638198   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.638078   27525 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/ha-845088.rawdisk...
	I0729 01:03:12.638239   27502 main.go:141] libmachine: (ha-845088) DBG | Writing magic tar header
	I0729 01:03:12.638254   27502 main.go:141] libmachine: (ha-845088) DBG | Writing SSH key tar header
	I0729 01:03:12.638303   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:12.638214   27525 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088 ...
	I0729 01:03:12.638351   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088
	I0729 01:03:12.638379   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:03:12.638391   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088 (perms=drwx------)
	I0729 01:03:12.638405   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:03:12.638415   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:03:12.638429   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:03:12.638442   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:03:12.638456   27502 main.go:141] libmachine: (ha-845088) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:03:12.638481   27502 main.go:141] libmachine: (ha-845088) Creating domain...
	I0729 01:03:12.638494   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:03:12.638509   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:03:12.638522   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:03:12.638536   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:03:12.638552   27502 main.go:141] libmachine: (ha-845088) DBG | Checking permissions on dir: /home
	I0729 01:03:12.638567   27502 main.go:141] libmachine: (ha-845088) DBG | Skipping /home - not owner
	I0729 01:03:12.639556   27502 main.go:141] libmachine: (ha-845088) define libvirt domain using xml: 
	I0729 01:03:12.639580   27502 main.go:141] libmachine: (ha-845088) <domain type='kvm'>
	I0729 01:03:12.639590   27502 main.go:141] libmachine: (ha-845088)   <name>ha-845088</name>
	I0729 01:03:12.639600   27502 main.go:141] libmachine: (ha-845088)   <memory unit='MiB'>2200</memory>
	I0729 01:03:12.639629   27502 main.go:141] libmachine: (ha-845088)   <vcpu>2</vcpu>
	I0729 01:03:12.639658   27502 main.go:141] libmachine: (ha-845088)   <features>
	I0729 01:03:12.639671   27502 main.go:141] libmachine: (ha-845088)     <acpi/>
	I0729 01:03:12.639681   27502 main.go:141] libmachine: (ha-845088)     <apic/>
	I0729 01:03:12.639691   27502 main.go:141] libmachine: (ha-845088)     <pae/>
	I0729 01:03:12.639703   27502 main.go:141] libmachine: (ha-845088)     
	I0729 01:03:12.639714   27502 main.go:141] libmachine: (ha-845088)   </features>
	I0729 01:03:12.639726   27502 main.go:141] libmachine: (ha-845088)   <cpu mode='host-passthrough'>
	I0729 01:03:12.639745   27502 main.go:141] libmachine: (ha-845088)   
	I0729 01:03:12.639759   27502 main.go:141] libmachine: (ha-845088)   </cpu>
	I0729 01:03:12.639776   27502 main.go:141] libmachine: (ha-845088)   <os>
	I0729 01:03:12.639783   27502 main.go:141] libmachine: (ha-845088)     <type>hvm</type>
	I0729 01:03:12.639794   27502 main.go:141] libmachine: (ha-845088)     <boot dev='cdrom'/>
	I0729 01:03:12.639801   27502 main.go:141] libmachine: (ha-845088)     <boot dev='hd'/>
	I0729 01:03:12.639807   27502 main.go:141] libmachine: (ha-845088)     <bootmenu enable='no'/>
	I0729 01:03:12.639813   27502 main.go:141] libmachine: (ha-845088)   </os>
	I0729 01:03:12.639818   27502 main.go:141] libmachine: (ha-845088)   <devices>
	I0729 01:03:12.639825   27502 main.go:141] libmachine: (ha-845088)     <disk type='file' device='cdrom'>
	I0729 01:03:12.639833   27502 main.go:141] libmachine: (ha-845088)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/boot2docker.iso'/>
	I0729 01:03:12.639840   27502 main.go:141] libmachine: (ha-845088)       <target dev='hdc' bus='scsi'/>
	I0729 01:03:12.639845   27502 main.go:141] libmachine: (ha-845088)       <readonly/>
	I0729 01:03:12.639851   27502 main.go:141] libmachine: (ha-845088)     </disk>
	I0729 01:03:12.639857   27502 main.go:141] libmachine: (ha-845088)     <disk type='file' device='disk'>
	I0729 01:03:12.639865   27502 main.go:141] libmachine: (ha-845088)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:03:12.639872   27502 main.go:141] libmachine: (ha-845088)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/ha-845088.rawdisk'/>
	I0729 01:03:12.639879   27502 main.go:141] libmachine: (ha-845088)       <target dev='hda' bus='virtio'/>
	I0729 01:03:12.639884   27502 main.go:141] libmachine: (ha-845088)     </disk>
	I0729 01:03:12.639891   27502 main.go:141] libmachine: (ha-845088)     <interface type='network'>
	I0729 01:03:12.639908   27502 main.go:141] libmachine: (ha-845088)       <source network='mk-ha-845088'/>
	I0729 01:03:12.639924   27502 main.go:141] libmachine: (ha-845088)       <model type='virtio'/>
	I0729 01:03:12.639938   27502 main.go:141] libmachine: (ha-845088)     </interface>
	I0729 01:03:12.639950   27502 main.go:141] libmachine: (ha-845088)     <interface type='network'>
	I0729 01:03:12.639975   27502 main.go:141] libmachine: (ha-845088)       <source network='default'/>
	I0729 01:03:12.639986   27502 main.go:141] libmachine: (ha-845088)       <model type='virtio'/>
	I0729 01:03:12.639998   27502 main.go:141] libmachine: (ha-845088)     </interface>
	I0729 01:03:12.640013   27502 main.go:141] libmachine: (ha-845088)     <serial type='pty'>
	I0729 01:03:12.640025   27502 main.go:141] libmachine: (ha-845088)       <target port='0'/>
	I0729 01:03:12.640034   27502 main.go:141] libmachine: (ha-845088)     </serial>
	I0729 01:03:12.640042   27502 main.go:141] libmachine: (ha-845088)     <console type='pty'>
	I0729 01:03:12.640051   27502 main.go:141] libmachine: (ha-845088)       <target type='serial' port='0'/>
	I0729 01:03:12.640063   27502 main.go:141] libmachine: (ha-845088)     </console>
	I0729 01:03:12.640073   27502 main.go:141] libmachine: (ha-845088)     <rng model='virtio'>
	I0729 01:03:12.640085   27502 main.go:141] libmachine: (ha-845088)       <backend model='random'>/dev/random</backend>
	I0729 01:03:12.640106   27502 main.go:141] libmachine: (ha-845088)     </rng>
	I0729 01:03:12.640116   27502 main.go:141] libmachine: (ha-845088)     
	I0729 01:03:12.640123   27502 main.go:141] libmachine: (ha-845088)     
	I0729 01:03:12.640135   27502 main.go:141] libmachine: (ha-845088)   </devices>
	I0729 01:03:12.640144   27502 main.go:141] libmachine: (ha-845088) </domain>
	I0729 01:03:12.640158   27502 main.go:141] libmachine: (ha-845088) 
	I0729 01:03:12.644333   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:ad:7c:e6 in network default
	I0729 01:03:12.644849   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:12.644881   27502 main.go:141] libmachine: (ha-845088) Ensuring networks are active...
	I0729 01:03:12.645555   27502 main.go:141] libmachine: (ha-845088) Ensuring network default is active
	I0729 01:03:12.645997   27502 main.go:141] libmachine: (ha-845088) Ensuring network mk-ha-845088 is active
	I0729 01:03:12.646730   27502 main.go:141] libmachine: (ha-845088) Getting domain xml...
	I0729 01:03:12.647542   27502 main.go:141] libmachine: (ha-845088) Creating domain...
	I0729 01:03:13.820993   27502 main.go:141] libmachine: (ha-845088) Waiting to get IP...
	I0729 01:03:13.821909   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:13.822249   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:13.822301   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:13.822244   27525 retry.go:31] will retry after 205.352697ms: waiting for machine to come up
	I0729 01:03:14.029845   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:14.030257   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:14.030278   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:14.030223   27525 retry.go:31] will retry after 381.277024ms: waiting for machine to come up
	I0729 01:03:14.412699   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:14.413153   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:14.413174   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:14.413118   27525 retry.go:31] will retry after 305.705256ms: waiting for machine to come up
	I0729 01:03:14.720560   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:14.721032   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:14.721060   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:14.720984   27525 retry.go:31] will retry after 500.779269ms: waiting for machine to come up
	I0729 01:03:15.223870   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:15.224247   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:15.224273   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:15.224207   27525 retry.go:31] will retry after 590.26977ms: waiting for machine to come up
	I0729 01:03:15.815920   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:15.816426   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:15.816455   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:15.816358   27525 retry.go:31] will retry after 629.065185ms: waiting for machine to come up
	I0729 01:03:16.446722   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:16.447120   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:16.447262   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:16.447079   27525 retry.go:31] will retry after 1.124983475s: waiting for machine to come up
	I0729 01:03:17.575308   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:17.575769   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:17.575795   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:17.575726   27525 retry.go:31] will retry after 1.148377221s: waiting for machine to come up
	I0729 01:03:18.726112   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:18.726642   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:18.726669   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:18.726593   27525 retry.go:31] will retry after 1.423289352s: waiting for machine to come up
	I0729 01:03:20.152088   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:20.152694   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:20.152722   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:20.152660   27525 retry.go:31] will retry after 1.626608206s: waiting for machine to come up
	I0729 01:03:21.780646   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:21.781164   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:21.781192   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:21.781112   27525 retry.go:31] will retry after 2.526440066s: waiting for machine to come up
	I0729 01:03:24.308850   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:24.309278   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:24.309301   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:24.309206   27525 retry.go:31] will retry after 3.090555813s: waiting for machine to come up
	I0729 01:03:27.400891   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:27.401316   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:27.401339   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:27.401277   27525 retry.go:31] will retry after 4.468642103s: waiting for machine to come up
	I0729 01:03:31.874856   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:31.875259   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find current IP address of domain ha-845088 in network mk-ha-845088
	I0729 01:03:31.875283   27502 main.go:141] libmachine: (ha-845088) DBG | I0729 01:03:31.875211   27525 retry.go:31] will retry after 5.199836841s: waiting for machine to come up
	I0729 01:03:37.080567   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.080957   27502 main.go:141] libmachine: (ha-845088) Found IP for machine: 192.168.39.69
	I0729 01:03:37.080988   27502 main.go:141] libmachine: (ha-845088) Reserving static IP address...
	I0729 01:03:37.081001   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has current primary IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.081366   27502 main.go:141] libmachine: (ha-845088) DBG | unable to find host DHCP lease matching {name: "ha-845088", mac: "52:54:00:9a:b1:bc", ip: "192.168.39.69"} in network mk-ha-845088
	I0729 01:03:37.152760   27502 main.go:141] libmachine: (ha-845088) DBG | Getting to WaitForSSH function...
	I0729 01:03:37.152790   27502 main.go:141] libmachine: (ha-845088) Reserved static IP address: 192.168.39.69
	I0729 01:03:37.152804   27502 main.go:141] libmachine: (ha-845088) Waiting for SSH to be available...
	I0729 01:03:37.155421   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.155801   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.155825   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.156015   27502 main.go:141] libmachine: (ha-845088) DBG | Using SSH client type: external
	I0729 01:03:37.156037   27502 main.go:141] libmachine: (ha-845088) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa (-rw-------)
	I0729 01:03:37.156119   27502 main.go:141] libmachine: (ha-845088) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:03:37.156136   27502 main.go:141] libmachine: (ha-845088) DBG | About to run SSH command:
	I0729 01:03:37.156148   27502 main.go:141] libmachine: (ha-845088) DBG | exit 0
	I0729 01:03:37.278974   27502 main.go:141] libmachine: (ha-845088) DBG | SSH cmd err, output: <nil>: 
	I0729 01:03:37.279326   27502 main.go:141] libmachine: (ha-845088) KVM machine creation complete!
	I0729 01:03:37.279654   27502 main.go:141] libmachine: (ha-845088) Calling .GetConfigRaw
	I0729 01:03:37.280204   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:37.280393   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:37.280580   27502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:03:37.280597   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:03:37.281805   27502 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:03:37.281821   27502 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:03:37.281826   27502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:03:37.281831   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.284074   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.284468   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.284494   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.284678   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.284825   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.284934   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.285053   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.285229   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.285454   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.285473   27502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:03:37.386635   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:03:37.386660   27502 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:03:37.386668   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.389325   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.389644   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.389663   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.389832   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.390004   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.390166   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.390287   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.390513   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.390713   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.390728   27502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:03:37.491706   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:03:37.491792   27502 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:03:37.491804   27502 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:03:37.491812   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:37.492053   27502 buildroot.go:166] provisioning hostname "ha-845088"
	I0729 01:03:37.492077   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:37.492254   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.494745   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.495168   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.495192   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.495410   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.495587   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.495739   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.495861   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.496029   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.496232   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.496250   27502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088 && echo "ha-845088" | sudo tee /etc/hostname
	I0729 01:03:37.618225   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088
	
	I0729 01:03:37.618266   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.620877   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.621184   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.621210   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.621397   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.621568   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.621723   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.621844   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.621992   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:37.622172   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:37.622194   27502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:03:37.733775   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:03:37.733819   27502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:03:37.733859   27502 buildroot.go:174] setting up certificates
	I0729 01:03:37.733872   27502 provision.go:84] configureAuth start
	I0729 01:03:37.733884   27502 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:03:37.734131   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:37.736925   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.737265   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.737290   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.737477   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.739915   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.740277   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.740301   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.740442   27502 provision.go:143] copyHostCerts
	I0729 01:03:37.740471   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:03:37.740510   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:03:37.740536   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:03:37.740620   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:03:37.740719   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:03:37.740738   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:03:37.740745   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:03:37.740773   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:03:37.740865   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:03:37.740884   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:03:37.740891   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:03:37.740913   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:03:37.740979   27502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088 san=[127.0.0.1 192.168.39.69 ha-845088 localhost minikube]
	I0729 01:03:37.994395   27502 provision.go:177] copyRemoteCerts
	I0729 01:03:37.994454   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:03:37.994474   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:37.997273   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.997552   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:37.997579   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:37.997745   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:37.997931   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:37.998079   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:37.998329   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.077580   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:03:38.077663   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:03:38.101819   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:03:38.101886   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 01:03:38.125528   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:03:38.125601   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:03:38.152494   27502 provision.go:87] duration metric: took 418.607353ms to configureAuth
	I0729 01:03:38.152529   27502 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:03:38.152846   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:03:38.152970   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.155443   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.155899   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.155927   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.156064   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.156257   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.156434   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.156561   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.156695   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:38.156884   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:38.156902   27502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:03:38.415551   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
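The %!s(MISSING) markers in the logged command above appear to be an artifact of how minikube formats the command for its own log, not part of what actually runs on the node: the SSH output confirms that /etc/sysconfig/crio.minikube was written with the insecure-registry option. A hand-run equivalent of that step, reconstructed from the log, would look roughly like this sketch (paths and flag values taken from the output above):

```shell
# Sketch: write the CRI-O options file minikube provisions, then restart CRI-O.
sudo mkdir -p /etc/sysconfig
printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
sudo systemctl restart crio
```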
	
	I0729 01:03:38.415592   27502 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:03:38.415605   27502 main.go:141] libmachine: (ha-845088) Calling .GetURL
	I0729 01:03:38.416978   27502 main.go:141] libmachine: (ha-845088) DBG | Using libvirt version 6000000
	I0729 01:03:38.419133   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.419491   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.419520   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.419668   27502 main.go:141] libmachine: Docker is up and running!
	I0729 01:03:38.419680   27502 main.go:141] libmachine: Reticulating splines...
	I0729 01:03:38.419688   27502 client.go:171] duration metric: took 26.20895079s to LocalClient.Create
	I0729 01:03:38.419712   27502 start.go:167] duration metric: took 26.209010013s to libmachine.API.Create "ha-845088"
	I0729 01:03:38.419725   27502 start.go:293] postStartSetup for "ha-845088" (driver="kvm2")
	I0729 01:03:38.419739   27502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:03:38.419760   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.419968   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:03:38.419987   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.421740   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.422019   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.422047   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.422145   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.422372   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.422520   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.422734   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.505848   27502 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:03:38.510137   27502 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:03:38.510159   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:03:38.510215   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:03:38.510280   27502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:03:38.510289   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:03:38.510370   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:03:38.519588   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:03:38.542496   27502 start.go:296] duration metric: took 122.758329ms for postStartSetup
	I0729 01:03:38.542538   27502 main.go:141] libmachine: (ha-845088) Calling .GetConfigRaw
	I0729 01:03:38.543090   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:38.546090   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.546423   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.546446   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.546709   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:03:38.546880   27502 start.go:128] duration metric: took 26.353773114s to createHost
	I0729 01:03:38.546927   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.549434   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.549758   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.549780   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.549920   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.550087   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.550241   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.550360   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.550492   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:03:38.550654   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:03:38.550666   27502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:03:38.651773   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215018.631808760
	
	I0729 01:03:38.651793   27502 fix.go:216] guest clock: 1722215018.631808760
	I0729 01:03:38.651869   27502 fix.go:229] Guest: 2024-07-29 01:03:38.63180876 +0000 UTC Remote: 2024-07-29 01:03:38.546890712 +0000 UTC m=+26.463181015 (delta=84.918048ms)
	I0729 01:03:38.651965   27502 fix.go:200] guest clock delta is within tolerance: 84.918048ms
	I0729 01:03:38.651975   27502 start.go:83] releasing machines lock for "ha-845088", held for 26.458954029s
	I0729 01:03:38.652007   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.652291   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:38.655227   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.655577   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.655603   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.655776   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.656397   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.656575   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:03:38.656649   27502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:03:38.656695   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.656827   27502 ssh_runner.go:195] Run: cat /version.json
	I0729 01:03:38.656854   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:03:38.659471   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659499   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659851   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.659886   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:38.659906   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659923   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:38.659978   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.660047   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:03:38.660193   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.660284   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:03:38.660352   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.660412   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:03:38.660481   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.660537   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:03:38.763188   27502 ssh_runner.go:195] Run: systemctl --version
	I0729 01:03:38.769051   27502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:03:38.928651   27502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:03:38.934880   27502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:03:38.934938   27502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:03:38.951248   27502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 01:03:38.951269   27502 start.go:495] detecting cgroup driver to use...
	I0729 01:03:38.951322   27502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:03:38.966590   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:03:38.980253   27502 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:03:38.980300   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:03:38.993611   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:03:39.006971   27502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:03:39.115717   27502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:03:39.249891   27502 docker.go:233] disabling docker service ...
	I0729 01:03:39.249954   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:03:39.264041   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:03:39.277314   27502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:03:39.405886   27502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:03:39.513242   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:03:39.526652   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:03:39.544453   27502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:03:39.544506   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.554325   27502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:03:39.554375   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.564401   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.574340   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.584435   27502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:03:39.595028   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.605150   27502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.622334   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:03:39.632242   27502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:03:39.641458   27502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:03:39.641509   27502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:03:39.654339   27502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:03:39.663905   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:03:39.773045   27502 ssh_runner.go:195] Run: sudo systemctl restart crio
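The block above is minikube preparing the guest to use CRI-O: it points crictl at the CRI-O socket, rewrites the pause image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf, loads br_netfilter (the first sysctl probe fails only because the module is not yet loaded), enables IP forwarding, and restarts the service. A rough hand-run equivalent is sketched below; it is distilled from the log for illustration, not minikube's exact code path, and assumes the same Buildroot guest and config paths.

```shell
# Sketch of the CRI-O preparation steps recorded in the log (run inside the guest VM).
sudo tee /etc/crictl.yaml <<'EOF'
runtime-endpoint: unix:///var/run/crio/crio.sock
EOF
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sudo modprobe br_netfilter                       # makes /proc/sys/net/bridge/bridge-nf-call-iptables available
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # required for pod-to-pod routing
sudo systemctl daemon-reload && sudo systemctl restart crio
```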
	I0729 01:03:39.919080   27502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:03:39.919152   27502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:03:39.923762   27502 start.go:563] Will wait 60s for crictl version
	I0729 01:03:39.923821   27502 ssh_runner.go:195] Run: which crictl
	I0729 01:03:39.927598   27502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:03:39.968591   27502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:03:39.968665   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:03:39.996574   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:03:40.026801   27502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:03:40.027835   27502 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:03:40.030475   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:40.030944   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:03:40.030970   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:03:40.031236   27502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:03:40.035284   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:03:40.048244   27502 kubeadm.go:883] updating cluster {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:03:40.048358   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:03:40.048399   27502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:03:40.081350   27502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 01:03:40.081420   27502 ssh_runner.go:195] Run: which lz4
	I0729 01:03:40.085479   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 01:03:40.085576   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 01:03:40.089825   27502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 01:03:40.089857   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 01:03:41.478198   27502 crio.go:462] duration metric: took 1.392656825s to copy over tarball
	I0729 01:03:41.478261   27502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 01:03:43.576178   27502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097890941s)
	I0729 01:03:43.576205   27502 crio.go:469] duration metric: took 2.097983811s to extract the tarball
	I0729 01:03:43.576212   27502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 01:03:43.613781   27502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:03:43.661358   27502 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:03:43.661380   27502 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:03:43.661388   27502 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.30.3 crio true true} ...
	I0729 01:03:43.661491   27502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:03:43.661570   27502 ssh_runner.go:195] Run: crio config
	I0729 01:03:43.707003   27502 cni.go:84] Creating CNI manager for ""
	I0729 01:03:43.707027   27502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 01:03:43.707035   27502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:03:43.707055   27502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-845088 NodeName:ha-845088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:03:43.707253   27502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-845088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
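The YAML above is the kubeadm configuration minikube renders and later copies to /var/tmp/minikube/kubeadm.yaml on the node before running kubeadm init. If you need to sanity-check a config like this by hand, recent kubeadm releases (the run above uses v1.30.3) ship a validator; the invocation below is a sketch run inside the guest and assumes the file has already been copied into place.

```shell
# Sketch: validate the rendered kubeadm config with the bundled kubeadm binary.
sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
```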
	
	I0729 01:03:43.707289   27502 kube-vip.go:115] generating kube-vip config ...
	I0729 01:03:43.707329   27502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:03:43.724749   27502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:03:43.724858   27502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
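The static-pod manifest above is what provides the HA endpoint for this cluster: kube-vip runs on each control-plane node, takes the plndr-cp-lock leadership lease, and binds the virtual IP 192.168.39.254 on eth0, so control-plane.minikube.internal:8443 keeps resolving to a live API server if one node goes down. Once the cluster is up, one way to confirm the VIP landed on the current leader is sketched below (profile name and address taken from the log; the mirror-pod name follows the usual static-pod convention of pod name plus node name).

```shell
# Sketch: check whether the kube-vip virtual IP is bound on this control-plane node.
minikube -p ha-845088 ssh -- ip addr show eth0 | grep 192.168.39.254
# The kube-vip static pod is mirrored into the API as kube-vip-<node-name>:
kubectl --context ha-845088 -n kube-system get pod kube-vip-ha-845088
```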
	I0729 01:03:43.724909   27502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:03:43.734386   27502 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:03:43.734438   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 01:03:43.743325   27502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 01:03:43.759839   27502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:03:43.776186   27502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 01:03:43.792929   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 01:03:43.809209   27502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:03:43.813190   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:03:43.825580   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:03:43.939758   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:03:43.956174   27502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.69
	I0729 01:03:43.956193   27502 certs.go:194] generating shared ca certs ...
	I0729 01:03:43.956207   27502 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:43.956372   27502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:03:43.956429   27502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:03:43.956443   27502 certs.go:256] generating profile certs ...
	I0729 01:03:43.956507   27502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:03:43.956525   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt with IP's: []
	I0729 01:03:44.224079   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt ...
	I0729 01:03:44.224108   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt: {Name:mkbb4d0179849c0921fee0deff743f9640d04c5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.224266   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key ...
	I0729 01:03:44.224277   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key: {Name:mk45884c5b38065ca1050aae4f24fc7278238f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.224355   27502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4
	I0729 01:03:44.224369   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.254]
	I0729 01:03:44.428782   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4 ...
	I0729 01:03:44.428812   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4: {Name:mk76a6c23b190fdfad7f1063ffe365289899ef62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.428966   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4 ...
	I0729 01:03:44.428978   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4: {Name:mk95b0589efbe991df6cd9765c9a01073f882d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.429050   27502 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.c7fdf3a4 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:03:44.429116   27502 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.c7fdf3a4 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
	I0729 01:03:44.429164   27502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:03:44.429178   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt with IP's: []
	I0729 01:03:44.483832   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt ...
	I0729 01:03:44.483859   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt: {Name:mk686bd0f2ed47a16e90530b62f805f556e01d5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.484000   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key ...
	I0729 01:03:44.484010   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key: {Name:mkf175ff6357a3a134578e52096b66d046e1dc3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:03:44.484073   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:03:44.484087   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:03:44.484100   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:03:44.484120   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:03:44.484135   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:03:44.484148   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:03:44.484157   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:03:44.484169   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:03:44.484219   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:03:44.484251   27502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:03:44.484260   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:03:44.484278   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:03:44.484298   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:03:44.484322   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:03:44.484361   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:03:44.484385   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.484397   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.484408   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.484938   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:03:44.511156   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:03:44.535259   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:03:44.560941   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:03:44.584571   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 01:03:44.610596   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:03:44.634708   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:03:44.659226   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:03:44.683196   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:03:44.705864   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:03:44.731688   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:03:44.754701   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:03:44.781252   27502 ssh_runner.go:195] Run: openssl version
	I0729 01:03:44.791545   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:03:44.807347   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.815496   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.815549   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:03:44.821732   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:03:44.832529   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:03:44.843275   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.847784   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.847831   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:03:44.853560   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:03:44.864183   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:03:44.874389   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.879085   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.879135   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:03:44.885177   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
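The test/ln pairs above install the minikube CA and the user certificates into the system trust store; the 8-hex-digit link names (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values, which is why each link is preceded by an openssl x509 -hash call. The sketch below reproduces the same relationship by hand, using the minikube CA paths from the log.

```shell
# Sketch: a trust-store symlink is named after the certificate's subject hash plus a ".0" suffix.
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
```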
	I0729 01:03:44.895553   27502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:03:44.899597   27502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:03:44.899653   27502 kubeadm.go:392] StartCluster: {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:03:44.899739   27502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:03:44.899798   27502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:03:44.940375   27502 cri.go:89] found id: ""
	I0729 01:03:44.940469   27502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 01:03:44.950263   27502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 01:03:44.964111   27502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 01:03:44.974217   27502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 01:03:44.974234   27502 kubeadm.go:157] found existing configuration files:
	
	I0729 01:03:44.974284   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 01:03:44.984164   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 01:03:44.984230   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 01:03:44.994145   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 01:03:45.003670   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 01:03:45.003724   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 01:03:45.013528   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 01:03:45.023044   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 01:03:45.023124   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 01:03:45.032959   27502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 01:03:45.042034   27502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 01:03:45.042102   27502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 01:03:45.052601   27502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 01:03:45.311557   27502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 01:03:56.764039   27502 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 01:03:56.764102   27502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 01:03:56.764202   27502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 01:03:56.764305   27502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 01:03:56.764412   27502 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 01:03:56.764477   27502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 01:03:56.765979   27502 out.go:204]   - Generating certificates and keys ...
	I0729 01:03:56.766081   27502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 01:03:56.766176   27502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 01:03:56.766283   27502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 01:03:56.766366   27502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 01:03:56.766456   27502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 01:03:56.766523   27502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 01:03:56.766594   27502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 01:03:56.766721   27502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-845088 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I0729 01:03:56.766802   27502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 01:03:56.766951   27502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-845088 localhost] and IPs [192.168.39.69 127.0.0.1 ::1]
	I0729 01:03:56.767044   27502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 01:03:56.767158   27502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 01:03:56.767209   27502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 01:03:56.767276   27502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 01:03:56.767332   27502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 01:03:56.767380   27502 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 01:03:56.767427   27502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 01:03:56.767483   27502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 01:03:56.767528   27502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 01:03:56.767593   27502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 01:03:56.767650   27502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 01:03:56.769063   27502 out.go:204]   - Booting up control plane ...
	I0729 01:03:56.769162   27502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 01:03:56.769250   27502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 01:03:56.769323   27502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 01:03:56.769428   27502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 01:03:56.769527   27502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 01:03:56.769562   27502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 01:03:56.769683   27502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 01:03:56.769754   27502 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 01:03:56.769812   27502 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002033865s
	I0729 01:03:56.769917   27502 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 01:03:56.770002   27502 kubeadm.go:310] [api-check] The API server is healthy after 5.773462821s
	I0729 01:03:56.770153   27502 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 01:03:56.770304   27502 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 01:03:56.770381   27502 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 01:03:56.770570   27502 kubeadm.go:310] [mark-control-plane] Marking the node ha-845088 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 01:03:56.770645   27502 kubeadm.go:310] [bootstrap-token] Using token: wba6wh.0wq67cx7p2t5liwh
	I0729 01:03:56.771907   27502 out.go:204]   - Configuring RBAC rules ...
	I0729 01:03:56.772013   27502 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 01:03:56.772128   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 01:03:56.772308   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 01:03:56.772435   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 01:03:56.772550   27502 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 01:03:56.772648   27502 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 01:03:56.772792   27502 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 01:03:56.772869   27502 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 01:03:56.772923   27502 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 01:03:56.772931   27502 kubeadm.go:310] 
	I0729 01:03:56.772993   27502 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 01:03:56.773002   27502 kubeadm.go:310] 
	I0729 01:03:56.773106   27502 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 01:03:56.773114   27502 kubeadm.go:310] 
	I0729 01:03:56.773146   27502 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 01:03:56.773204   27502 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 01:03:56.773249   27502 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 01:03:56.773255   27502 kubeadm.go:310] 
	I0729 01:03:56.773301   27502 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 01:03:56.773307   27502 kubeadm.go:310] 
	I0729 01:03:56.773376   27502 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 01:03:56.773386   27502 kubeadm.go:310] 
	I0729 01:03:56.773451   27502 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 01:03:56.773560   27502 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 01:03:56.773650   27502 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 01:03:56.773659   27502 kubeadm.go:310] 
	I0729 01:03:56.773734   27502 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 01:03:56.773809   27502 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 01:03:56.773816   27502 kubeadm.go:310] 
	I0729 01:03:56.773888   27502 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wba6wh.0wq67cx7p2t5liwh \
	I0729 01:03:56.774002   27502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 \
	I0729 01:03:56.774033   27502 kubeadm.go:310] 	--control-plane 
	I0729 01:03:56.774047   27502 kubeadm.go:310] 
	I0729 01:03:56.774151   27502 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 01:03:56.774160   27502 kubeadm.go:310] 
	I0729 01:03:56.774237   27502 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wba6wh.0wq67cx7p2t5liwh \
	I0729 01:03:56.774347   27502 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 
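	(For reference, the init output above already prints the admin kubeconfig path and the join commands. A minimal sanity check on the freshly initialized control plane, using only paths shown in that output, would be:
	    export KUBECONFIG=/etc/kubernetes/admin.conf   # path printed by kubeadm above
	    kubectl get nodes                              # ha-845088 should report Ready once CNI is applied
	    kubectl get pods -n kube-system                # control-plane static pods plus CoreDNS/kube-proxy
	This is an illustrative check run on the ha-845088 node, not part of the recorded test run.)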
	I0729 01:03:56.774360   27502 cni.go:84] Creating CNI manager for ""
	I0729 01:03:56.774369   27502 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 01:03:56.776248   27502 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 01:03:56.777435   27502 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 01:03:56.782945   27502 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 01:03:56.782956   27502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 01:03:56.801793   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 01:03:57.177597   27502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 01:03:57.177701   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:57.177740   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-845088 minikube.k8s.io/updated_at=2024_07_29T01_03_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=ha-845088 minikube.k8s.io/primary=true
	I0729 01:03:57.404875   27502 ops.go:34] apiserver oom_adj: -16
	I0729 01:03:57.404945   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:57.905271   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:58.405992   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:58.905034   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:59.405558   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:03:59.905146   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:00.405082   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:00.905104   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:01.405848   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:01.906077   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:02.405996   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:02.905569   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:03.405157   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:03.905634   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:04.405759   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:04.905414   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:05.405910   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:05.905062   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:06.405553   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:06.905623   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:07.405304   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:07.905076   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:08.405716   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:08.905707   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:09.405988   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 01:04:09.520575   27502 kubeadm.go:1113] duration metric: took 12.342932836s to wait for elevateKubeSystemPrivileges
	I0729 01:04:09.520618   27502 kubeadm.go:394] duration metric: took 24.62096883s to StartCluster
	I0729 01:04:09.520641   27502 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:09.520735   27502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:04:09.521863   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:09.522124   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 01:04:09.522122   27502 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:04:09.522150   27502 start.go:241] waiting for startup goroutines ...
	I0729 01:04:09.522167   27502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 01:04:09.522262   27502 addons.go:69] Setting storage-provisioner=true in profile "ha-845088"
	I0729 01:04:09.522292   27502 addons.go:234] Setting addon storage-provisioner=true in "ha-845088"
	I0729 01:04:09.522318   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:09.522335   27502 addons.go:69] Setting default-storageclass=true in profile "ha-845088"
	I0729 01:04:09.522322   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:09.522370   27502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-845088"
	I0729 01:04:09.522817   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.522859   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.522873   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.522919   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.537493   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I0729 01:04:09.537661   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0729 01:04:09.537989   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.538079   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.538502   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.538518   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.538502   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.538573   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.538879   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.538938   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.539054   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:09.539521   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.539565   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.542321   27502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:04:09.542583   27502 kapi.go:59] client config for ha-845088: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 01:04:09.543083   27502 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 01:04:09.543284   27502 addons.go:234] Setting addon default-storageclass=true in "ha-845088"
	I0729 01:04:09.543320   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:09.543599   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.543625   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.554984   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0729 01:04:09.555471   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.556036   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.556068   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.556390   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.556562   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:09.558407   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:09.558419   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0729 01:04:09.558732   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.559174   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.559203   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.559500   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.559968   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:09.560000   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:09.560530   27502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:04:09.561902   27502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 01:04:09.561921   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 01:04:09.561938   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:09.565129   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.565561   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:09.565587   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.565736   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:09.565962   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:09.566199   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:09.566398   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:09.575135   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0729 01:04:09.575558   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:09.576098   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:09.576129   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:09.576464   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:09.576631   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:09.578328   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:09.578541   27502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 01:04:09.578557   27502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 01:04:09.578574   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:09.581517   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.581935   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:09.581964   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:09.582084   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:09.582248   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:09.582389   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:09.582499   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:09.637293   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
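	(The sed pipeline above edits the coredns ConfigMap in place. Judging from the expressions in that command, the resulting Corefile gains a stanza roughly like the following; indentation is illustrative:
	    log
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	i.e. host.minikube.internal resolves to the host-side gateway 192.168.39.1, matching the "host record injected into CoreDNS's ConfigMap" line below.)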
	I0729 01:04:09.695581   27502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 01:04:09.734937   27502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 01:04:10.056900   27502 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 01:04:10.369207   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369235   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369209   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369298   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369534   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369541   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369552   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369555   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369562   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369564   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.369570   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369573   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.369804   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369824   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369841   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.369849   27502 main.go:141] libmachine: (ha-845088) DBG | Closing plugin on server side
	I0729 01:04:10.369861   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.369989   27502 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 01:04:10.370001   27502 round_trippers.go:469] Request Headers:
	I0729 01:04:10.370011   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:04:10.370018   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:04:10.385217   27502 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0729 01:04:10.385736   27502 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 01:04:10.385749   27502 round_trippers.go:469] Request Headers:
	I0729 01:04:10.385757   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:04:10.385761   27502 round_trippers.go:473]     Content-Type: application/json
	I0729 01:04:10.385768   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:04:10.388548   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:04:10.388683   27502 main.go:141] libmachine: Making call to close driver server
	I0729 01:04:10.388694   27502 main.go:141] libmachine: (ha-845088) Calling .Close
	I0729 01:04:10.388943   27502 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:04:10.388976   27502 main.go:141] libmachine: (ha-845088) DBG | Closing plugin on server side
	I0729 01:04:10.388987   27502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:04:10.390661   27502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 01:04:10.391903   27502 addons.go:510] duration metric: took 869.739884ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 01:04:10.391937   27502 start.go:246] waiting for cluster config update ...
	I0729 01:04:10.391949   27502 start.go:255] writing updated cluster config ...
	I0729 01:04:10.393367   27502 out.go:177] 
	I0729 01:04:10.394536   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:10.394621   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:04:10.396227   27502 out.go:177] * Starting "ha-845088-m02" control-plane node in "ha-845088" cluster
	I0729 01:04:10.397301   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:04:10.397323   27502 cache.go:56] Caching tarball of preloaded images
	I0729 01:04:10.397415   27502 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:04:10.397429   27502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:04:10.397502   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:04:10.397661   27502 start.go:360] acquireMachinesLock for ha-845088-m02: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:04:10.397711   27502 start.go:364] duration metric: took 30.086µs to acquireMachinesLock for "ha-845088-m02"
	I0729 01:04:10.397735   27502 start.go:93] Provisioning new machine with config: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:04:10.397824   27502 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 01:04:10.399120   27502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:04:10.399205   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:10.399230   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:10.413793   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I0729 01:04:10.414280   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:10.414715   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:10.414740   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:10.415137   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:10.415314   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:10.415431   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:10.415555   27502 start.go:159] libmachine.API.Create for "ha-845088" (driver="kvm2")
	I0729 01:04:10.415578   27502 client.go:168] LocalClient.Create starting
	I0729 01:04:10.415604   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:04:10.415633   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:04:10.415647   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:04:10.415693   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:04:10.415711   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:04:10.415721   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:04:10.415738   27502 main.go:141] libmachine: Running pre-create checks...
	I0729 01:04:10.415746   27502 main.go:141] libmachine: (ha-845088-m02) Calling .PreCreateCheck
	I0729 01:04:10.415902   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetConfigRaw
	I0729 01:04:10.416254   27502 main.go:141] libmachine: Creating machine...
	I0729 01:04:10.416267   27502 main.go:141] libmachine: (ha-845088-m02) Calling .Create
	I0729 01:04:10.416379   27502 main.go:141] libmachine: (ha-845088-m02) Creating KVM machine...
	I0729 01:04:10.417469   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found existing default KVM network
	I0729 01:04:10.417609   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found existing private KVM network mk-ha-845088
	I0729 01:04:10.417725   27502 main.go:141] libmachine: (ha-845088-m02) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02 ...
	I0729 01:04:10.417758   27502 main.go:141] libmachine: (ha-845088-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:04:10.417797   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.417712   27901 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:04:10.417879   27502 main.go:141] libmachine: (ha-845088-m02) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:04:10.644430   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.644272   27901 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa...
	I0729 01:04:10.979532   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.979397   27901 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/ha-845088-m02.rawdisk...
	I0729 01:04:10.979570   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Writing magic tar header
	I0729 01:04:10.979585   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Writing SSH key tar header
	I0729 01:04:10.979597   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:10.979541   27901 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02 ...
	I0729 01:04:10.979699   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02
	I0729 01:04:10.979729   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02 (perms=drwx------)
	I0729 01:04:10.979740   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:04:10.979755   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:04:10.979772   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:04:10.979782   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:04:10.979791   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:04:10.979802   27502 main.go:141] libmachine: (ha-845088-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:04:10.979811   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:04:10.979818   27502 main.go:141] libmachine: (ha-845088-m02) Creating domain...
	I0729 01:04:10.979845   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:04:10.979868   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:04:10.979881   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:04:10.979896   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Checking permissions on dir: /home
	I0729 01:04:10.979910   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Skipping /home - not owner
	I0729 01:04:10.980692   27502 main.go:141] libmachine: (ha-845088-m02) define libvirt domain using xml: 
	I0729 01:04:10.980713   27502 main.go:141] libmachine: (ha-845088-m02) <domain type='kvm'>
	I0729 01:04:10.980725   27502 main.go:141] libmachine: (ha-845088-m02)   <name>ha-845088-m02</name>
	I0729 01:04:10.980730   27502 main.go:141] libmachine: (ha-845088-m02)   <memory unit='MiB'>2200</memory>
	I0729 01:04:10.980736   27502 main.go:141] libmachine: (ha-845088-m02)   <vcpu>2</vcpu>
	I0729 01:04:10.980747   27502 main.go:141] libmachine: (ha-845088-m02)   <features>
	I0729 01:04:10.980753   27502 main.go:141] libmachine: (ha-845088-m02)     <acpi/>
	I0729 01:04:10.980762   27502 main.go:141] libmachine: (ha-845088-m02)     <apic/>
	I0729 01:04:10.980771   27502 main.go:141] libmachine: (ha-845088-m02)     <pae/>
	I0729 01:04:10.980781   27502 main.go:141] libmachine: (ha-845088-m02)     
	I0729 01:04:10.980790   27502 main.go:141] libmachine: (ha-845088-m02)   </features>
	I0729 01:04:10.980798   27502 main.go:141] libmachine: (ha-845088-m02)   <cpu mode='host-passthrough'>
	I0729 01:04:10.980804   27502 main.go:141] libmachine: (ha-845088-m02)   
	I0729 01:04:10.980812   27502 main.go:141] libmachine: (ha-845088-m02)   </cpu>
	I0729 01:04:10.980817   27502 main.go:141] libmachine: (ha-845088-m02)   <os>
	I0729 01:04:10.980824   27502 main.go:141] libmachine: (ha-845088-m02)     <type>hvm</type>
	I0729 01:04:10.980837   27502 main.go:141] libmachine: (ha-845088-m02)     <boot dev='cdrom'/>
	I0729 01:04:10.980850   27502 main.go:141] libmachine: (ha-845088-m02)     <boot dev='hd'/>
	I0729 01:04:10.980876   27502 main.go:141] libmachine: (ha-845088-m02)     <bootmenu enable='no'/>
	I0729 01:04:10.980891   27502 main.go:141] libmachine: (ha-845088-m02)   </os>
	I0729 01:04:10.980900   27502 main.go:141] libmachine: (ha-845088-m02)   <devices>
	I0729 01:04:10.980907   27502 main.go:141] libmachine: (ha-845088-m02)     <disk type='file' device='cdrom'>
	I0729 01:04:10.980942   27502 main.go:141] libmachine: (ha-845088-m02)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/boot2docker.iso'/>
	I0729 01:04:10.980970   27502 main.go:141] libmachine: (ha-845088-m02)       <target dev='hdc' bus='scsi'/>
	I0729 01:04:10.980981   27502 main.go:141] libmachine: (ha-845088-m02)       <readonly/>
	I0729 01:04:10.980991   27502 main.go:141] libmachine: (ha-845088-m02)     </disk>
	I0729 01:04:10.981004   27502 main.go:141] libmachine: (ha-845088-m02)     <disk type='file' device='disk'>
	I0729 01:04:10.981012   27502 main.go:141] libmachine: (ha-845088-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:04:10.981039   27502 main.go:141] libmachine: (ha-845088-m02)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/ha-845088-m02.rawdisk'/>
	I0729 01:04:10.981058   27502 main.go:141] libmachine: (ha-845088-m02)       <target dev='hda' bus='virtio'/>
	I0729 01:04:10.981066   27502 main.go:141] libmachine: (ha-845088-m02)     </disk>
	I0729 01:04:10.981074   27502 main.go:141] libmachine: (ha-845088-m02)     <interface type='network'>
	I0729 01:04:10.981081   27502 main.go:141] libmachine: (ha-845088-m02)       <source network='mk-ha-845088'/>
	I0729 01:04:10.981088   27502 main.go:141] libmachine: (ha-845088-m02)       <model type='virtio'/>
	I0729 01:04:10.981093   27502 main.go:141] libmachine: (ha-845088-m02)     </interface>
	I0729 01:04:10.981100   27502 main.go:141] libmachine: (ha-845088-m02)     <interface type='network'>
	I0729 01:04:10.981110   27502 main.go:141] libmachine: (ha-845088-m02)       <source network='default'/>
	I0729 01:04:10.981116   27502 main.go:141] libmachine: (ha-845088-m02)       <model type='virtio'/>
	I0729 01:04:10.981143   27502 main.go:141] libmachine: (ha-845088-m02)     </interface>
	I0729 01:04:10.981168   27502 main.go:141] libmachine: (ha-845088-m02)     <serial type='pty'>
	I0729 01:04:10.981181   27502 main.go:141] libmachine: (ha-845088-m02)       <target port='0'/>
	I0729 01:04:10.981192   27502 main.go:141] libmachine: (ha-845088-m02)     </serial>
	I0729 01:04:10.981203   27502 main.go:141] libmachine: (ha-845088-m02)     <console type='pty'>
	I0729 01:04:10.981210   27502 main.go:141] libmachine: (ha-845088-m02)       <target type='serial' port='0'/>
	I0729 01:04:10.981218   27502 main.go:141] libmachine: (ha-845088-m02)     </console>
	I0729 01:04:10.981229   27502 main.go:141] libmachine: (ha-845088-m02)     <rng model='virtio'>
	I0729 01:04:10.981243   27502 main.go:141] libmachine: (ha-845088-m02)       <backend model='random'>/dev/random</backend>
	I0729 01:04:10.981257   27502 main.go:141] libmachine: (ha-845088-m02)     </rng>
	I0729 01:04:10.981268   27502 main.go:141] libmachine: (ha-845088-m02)     
	I0729 01:04:10.981275   27502 main.go:141] libmachine: (ha-845088-m02)     
	I0729 01:04:10.981283   27502 main.go:141] libmachine: (ha-845088-m02)   </devices>
	I0729 01:04:10.981291   27502 main.go:141] libmachine: (ha-845088-m02) </domain>
	I0729 01:04:10.981300   27502 main.go:141] libmachine: (ha-845088-m02) 
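	(The XML above is what the kvm2 driver hands to libvirt when it "defines" and then "creates" the domain. Done by hand with the stock libvirt CLI, an equivalent flow would look roughly like this sketch, assuming the XML were saved to a file named ha-845088-m02.xml (the driver actually passes it in memory):
	    virsh define ha-845088-m02.xml   # register the domain from the XML; hypothetical file name
	    virsh start ha-845088-m02        # boot it, corresponding to "Creating domain..." below
	    virsh domifaddr ha-845088-m02    # poll for the DHCP lease, the "Waiting to get IP" loop that follows
	This is an illustration of the libvirt steps, not a command recorded in this test run.)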
	I0729 01:04:10.987788   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:32:4f:7d in network default
	I0729 01:04:10.988325   27502 main.go:141] libmachine: (ha-845088-m02) Ensuring networks are active...
	I0729 01:04:10.988347   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:10.989028   27502 main.go:141] libmachine: (ha-845088-m02) Ensuring network default is active
	I0729 01:04:10.989314   27502 main.go:141] libmachine: (ha-845088-m02) Ensuring network mk-ha-845088 is active
	I0729 01:04:10.989668   27502 main.go:141] libmachine: (ha-845088-m02) Getting domain xml...
	I0729 01:04:10.990320   27502 main.go:141] libmachine: (ha-845088-m02) Creating domain...
	I0729 01:04:12.182945   27502 main.go:141] libmachine: (ha-845088-m02) Waiting to get IP...
	I0729 01:04:12.184588   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:12.185226   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:12.185256   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:12.185168   27901 retry.go:31] will retry after 289.198233ms: waiting for machine to come up
	I0729 01:04:12.475541   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:12.476018   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:12.476042   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:12.475983   27901 retry.go:31] will retry after 317.394957ms: waiting for machine to come up
	I0729 01:04:12.795522   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:12.796068   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:12.796088   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:12.796026   27901 retry.go:31] will retry after 457.114248ms: waiting for machine to come up
	I0729 01:04:13.254701   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:13.255194   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:13.255224   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:13.255144   27901 retry.go:31] will retry after 595.132323ms: waiting for machine to come up
	I0729 01:04:13.851663   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:13.852282   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:13.852312   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:13.852240   27901 retry.go:31] will retry after 708.119901ms: waiting for machine to come up
	I0729 01:04:14.561481   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:14.561948   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:14.561978   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:14.561907   27901 retry.go:31] will retry after 788.634973ms: waiting for machine to come up
	I0729 01:04:15.352321   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:15.352863   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:15.352909   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:15.352829   27901 retry.go:31] will retry after 857.746874ms: waiting for machine to come up
	I0729 01:04:16.212356   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:16.212882   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:16.212908   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:16.212819   27901 retry.go:31] will retry after 1.465191331s: waiting for machine to come up
	I0729 01:04:17.679291   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:17.679628   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:17.679650   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:17.679594   27901 retry.go:31] will retry after 1.514834108s: waiting for machine to come up
	I0729 01:04:19.196241   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:19.196710   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:19.196739   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:19.196671   27901 retry.go:31] will retry after 1.789332149s: waiting for machine to come up
	I0729 01:04:20.987779   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:20.988128   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:20.988159   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:20.988100   27901 retry.go:31] will retry after 1.88591588s: waiting for machine to come up
	I0729 01:04:22.875421   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:22.875995   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:22.876037   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:22.875919   27901 retry.go:31] will retry after 2.781831956s: waiting for machine to come up
	I0729 01:04:25.659223   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:25.659731   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:25.659753   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:25.659692   27901 retry.go:31] will retry after 4.514403237s: waiting for machine to come up
	I0729 01:04:30.179257   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:30.179627   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find current IP address of domain ha-845088-m02 in network mk-ha-845088
	I0729 01:04:30.179669   27502 main.go:141] libmachine: (ha-845088-m02) DBG | I0729 01:04:30.179609   27901 retry.go:31] will retry after 3.951493535s: waiting for machine to come up
	I0729 01:04:34.135729   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.136242   27502 main.go:141] libmachine: (ha-845088-m02) Found IP for machine: 192.168.39.68
	I0729 01:04:34.136267   27502 main.go:141] libmachine: (ha-845088-m02) Reserving static IP address...
	I0729 01:04:34.136282   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has current primary IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.136599   27502 main.go:141] libmachine: (ha-845088-m02) DBG | unable to find host DHCP lease matching {name: "ha-845088-m02", mac: "52:54:00:d1:55:54", ip: "192.168.39.68"} in network mk-ha-845088
	I0729 01:04:34.206318   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Getting to WaitForSSH function...
	I0729 01:04:34.206347   27502 main.go:141] libmachine: (ha-845088-m02) Reserved static IP address: 192.168.39.68
	I0729 01:04:34.206360   27502 main.go:141] libmachine: (ha-845088-m02) Waiting for SSH to be available...
	I0729 01:04:34.209076   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.209598   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.209625   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.209808   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Using SSH client type: external
	I0729 01:04:34.209833   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa (-rw-------)
	I0729 01:04:34.209861   27502 main.go:141] libmachine: (ha-845088-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:04:34.209879   27502 main.go:141] libmachine: (ha-845088-m02) DBG | About to run SSH command:
	I0729 01:04:34.209894   27502 main.go:141] libmachine: (ha-845088-m02) DBG | exit 0
	I0729 01:04:34.335648   27502 main.go:141] libmachine: (ha-845088-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 01:04:34.335865   27502 main.go:141] libmachine: (ha-845088-m02) KVM machine creation complete!
	I0729 01:04:34.336175   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetConfigRaw
	I0729 01:04:34.336689   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:34.336879   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:34.337010   27502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:04:34.337026   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:04:34.338199   27502 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:04:34.338219   27502 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:04:34.338228   27502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:04:34.338237   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.340382   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.340729   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.340754   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.341043   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.341277   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.341439   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.341584   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.341769   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.342006   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.342025   27502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:04:34.446590   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:04:34.446617   27502 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:04:34.446628   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.449226   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.449570   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.449591   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.449705   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.449904   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.450093   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.450264   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.450445   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.450649   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.450662   27502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:04:34.556017   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:04:34.556118   27502 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:04:34.556130   27502 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:04:34.556141   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:34.556413   27502 buildroot.go:166] provisioning hostname "ha-845088-m02"
	I0729 01:04:34.556438   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:34.556610   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.559430   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.559805   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.559832   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.560062   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.560261   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.560412   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.560537   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.560678   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.560890   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.560907   27502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088-m02 && echo "ha-845088-m02" | sudo tee /etc/hostname
	I0729 01:04:34.677351   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088-m02
	
	I0729 01:04:34.677380   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.680212   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.680548   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.680575   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.680755   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:34.680928   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.681080   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:34.681209   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:34.681350   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:34.681506   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:34.681522   27502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:04:34.792246   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:04:34.792273   27502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:04:34.792292   27502 buildroot.go:174] setting up certificates
	I0729 01:04:34.792303   27502 provision.go:84] configureAuth start
	I0729 01:04:34.792315   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetMachineName
	I0729 01:04:34.792569   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:34.795284   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.795671   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.795699   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.795824   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:34.797710   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.797967   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:34.797993   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:34.798095   27502 provision.go:143] copyHostCerts
	I0729 01:04:34.798122   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:04:34.798158   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:04:34.798167   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:04:34.798234   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:04:34.798301   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:04:34.798318   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:04:34.798324   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:04:34.798348   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:04:34.798390   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:04:34.798407   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:04:34.798413   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:04:34.798434   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:04:34.798480   27502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088-m02 san=[127.0.0.1 192.168.39.68 ha-845088-m02 localhost minikube]
	I0729 01:04:35.036834   27502 provision.go:177] copyRemoteCerts
	I0729 01:04:35.036891   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:04:35.036911   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.039512   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.039790   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.039819   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.040005   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.040184   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.040319   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.040421   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.121478   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:04:35.121541   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:04:35.147403   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:04:35.147482   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:04:35.171398   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:04:35.171458   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 01:04:35.194919   27502 provision.go:87] duration metric: took 402.603951ms to configureAuth
	I0729 01:04:35.194943   27502 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:04:35.195111   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:35.195176   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.197932   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.198294   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.198322   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.198505   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.198686   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.198846   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.198950   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.199139   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:35.199314   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:35.199329   27502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:04:35.467956   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:04:35.467990   27502 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:04:35.468001   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetURL
	I0729 01:04:35.469282   27502 main.go:141] libmachine: (ha-845088-m02) DBG | Using libvirt version 6000000
	I0729 01:04:35.471402   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.471736   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.471765   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.471923   27502 main.go:141] libmachine: Docker is up and running!
	I0729 01:04:35.471936   27502 main.go:141] libmachine: Reticulating splines...
	I0729 01:04:35.471943   27502 client.go:171] duration metric: took 25.056359047s to LocalClient.Create
	I0729 01:04:35.471961   27502 start.go:167] duration metric: took 25.056408542s to libmachine.API.Create "ha-845088"
	I0729 01:04:35.471974   27502 start.go:293] postStartSetup for "ha-845088-m02" (driver="kvm2")
	I0729 01:04:35.471987   27502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:04:35.472009   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.472220   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:04:35.472242   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.474192   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.474431   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.474459   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.474570   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.474750   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.474866   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.475045   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.557194   27502 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:04:35.561234   27502 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:04:35.561256   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:04:35.561323   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:04:35.561414   27502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:04:35.561424   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:04:35.561525   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:04:35.570745   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:04:35.593786   27502 start.go:296] duration metric: took 121.798873ms for postStartSetup
	I0729 01:04:35.593836   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetConfigRaw
	I0729 01:04:35.594401   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:35.597013   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.597369   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.597398   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.597589   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:04:35.597813   27502 start.go:128] duration metric: took 25.199969681s to createHost
	I0729 01:04:35.597845   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.600163   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.600510   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.600536   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.600698   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.600881   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.601041   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.601172   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.601350   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:04:35.601538   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0729 01:04:35.601548   27502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:04:35.707526   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215075.684293180
	
	I0729 01:04:35.707549   27502 fix.go:216] guest clock: 1722215075.684293180
	I0729 01:04:35.707556   27502 fix.go:229] Guest: 2024-07-29 01:04:35.68429318 +0000 UTC Remote: 2024-07-29 01:04:35.597827637 +0000 UTC m=+83.514117948 (delta=86.465543ms)
	I0729 01:04:35.707570   27502 fix.go:200] guest clock delta is within tolerance: 86.465543ms
	I0729 01:04:35.707575   27502 start.go:83] releasing machines lock for "ha-845088-m02", held for 25.30985305s
	I0729 01:04:35.707595   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.707845   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:35.710561   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.710961   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.710984   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.713187   27502 out.go:177] * Found network options:
	I0729 01:04:35.714471   27502 out.go:177]   - NO_PROXY=192.168.39.69
	W0729 01:04:35.715649   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:04:35.715675   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.716140   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.716317   27502 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:04:35.716367   27502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:04:35.716410   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	W0729 01:04:35.716628   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:04:35.716681   27502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:04:35.716695   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:04:35.719117   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.719360   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.719521   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.719544   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.719698   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.719845   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:35.719863   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.719878   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:35.720027   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.720040   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:04:35.720196   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:04:35.720216   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.720330   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:04:35.720463   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:04:35.955395   27502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:04:35.961747   27502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:04:35.961805   27502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:04:35.978705   27502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 01:04:35.978725   27502 start.go:495] detecting cgroup driver to use...
	I0729 01:04:35.978788   27502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:04:35.995273   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:04:36.010704   27502 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:04:36.010758   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:04:36.026154   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:04:36.040175   27502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:04:36.165262   27502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:04:36.299726   27502 docker.go:233] disabling docker service ...
	I0729 01:04:36.299803   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:04:36.314101   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:04:36.327248   27502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:04:36.456152   27502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:04:36.577668   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:04:36.591512   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:04:36.610337   27502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:04:36.610404   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.620949   27502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:04:36.621005   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.632188   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.642694   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.653500   27502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:04:36.664444   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.674944   27502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.694016   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:04:36.704389   27502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:04:36.713960   27502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:04:36.714007   27502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:04:36.727754   27502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:04:36.737464   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:04:36.859547   27502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:04:36.996403   27502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:04:36.996499   27502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:04:37.001249   27502 start.go:563] Will wait 60s for crictl version
	I0729 01:04:37.001303   27502 ssh_runner.go:195] Run: which crictl
	I0729 01:04:37.005610   27502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:04:37.045547   27502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:04:37.045627   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:04:37.074592   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:04:37.102962   27502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:04:37.104421   27502 out.go:177]   - env NO_PROXY=192.168.39.69
	I0729 01:04:37.105582   27502 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:04:37.107871   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:37.108222   27502 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:04:25 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:04:37.108248   27502 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:04:37.108402   27502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:04:37.112665   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:04:37.125685   27502 mustload.go:65] Loading cluster: ha-845088
	I0729 01:04:37.125881   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:04:37.126169   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:37.126203   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:37.140417   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43583
	I0729 01:04:37.140801   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:37.141198   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:37.141218   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:37.141588   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:37.141762   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:04:37.143494   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:37.143750   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:37.143771   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:37.158014   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41759
	I0729 01:04:37.158521   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:37.158966   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:37.158985   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:37.159254   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:37.159435   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:37.159573   27502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.68
	I0729 01:04:37.159585   27502 certs.go:194] generating shared ca certs ...
	I0729 01:04:37.159602   27502 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:37.159714   27502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:04:37.159751   27502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:04:37.159759   27502 certs.go:256] generating profile certs ...
	I0729 01:04:37.159831   27502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:04:37.159855   27502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713
	I0729 01:04:37.159869   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.68 192.168.39.254]
	I0729 01:04:37.366318   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713 ...
	I0729 01:04:37.366347   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713: {Name:mkb24bcdc8ee02409df18eff5a4bc131d770117c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:37.366509   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713 ...
	I0729 01:04:37.366523   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713: {Name:mkd96f5a1ff15a4d77eca684ce230f7e1fbf5165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:04:37.366588   27502 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.a064a713 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:04:37.366714   27502 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.a064a713 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
	I0729 01:04:37.366841   27502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:04:37.366855   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:04:37.366868   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:04:37.366880   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:04:37.366892   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:04:37.366902   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:04:37.366912   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:04:37.366924   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:04:37.366933   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:04:37.366979   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:04:37.367011   27502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:04:37.367020   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:04:37.367041   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:04:37.367095   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:04:37.367121   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:04:37.367160   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:04:37.367186   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.367200   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.367211   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:04:37.367240   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:37.369968   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:37.370348   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:37.370376   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:37.370525   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:37.370724   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:37.370875   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:37.371022   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:37.443519   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 01:04:37.449169   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 01:04:37.460468   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 01:04:37.465134   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 01:04:37.475154   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 01:04:37.480107   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 01:04:37.489975   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 01:04:37.494197   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 01:04:37.503857   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 01:04:37.507993   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 01:04:37.517672   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 01:04:37.521671   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 01:04:37.531381   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:04:37.556947   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:04:37.582241   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:04:37.606378   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:04:37.630339   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 01:04:37.654670   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:04:37.679162   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:04:37.702464   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:04:37.725172   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:04:37.747400   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:04:37.770381   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:04:37.794773   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 01:04:37.811117   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 01:04:37.828998   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 01:04:37.845558   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 01:04:37.862898   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 01:04:37.880723   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 01:04:37.898535   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 01:04:37.916403   27502 ssh_runner.go:195] Run: openssl version
	I0729 01:04:37.922573   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:04:37.933193   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.937486   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.937526   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:04:37.943480   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:04:37.953945   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:04:37.964326   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.969058   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.969114   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:04:37.974785   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:04:37.985091   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:04:37.995496   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:04:37.999794   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:04:37.999843   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:04:38.005263   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:04:38.015719   27502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:04:38.019698   27502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:04:38.019753   27502 kubeadm.go:934] updating node {m02 192.168.39.68 8443 v1.30.3 crio true true} ...
	I0729 01:04:38.019860   27502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:04:38.019888   27502 kube-vip.go:115] generating kube-vip config ...
	I0729 01:04:38.019916   27502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:04:38.036187   27502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:04:38.036242   27502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 01:04:38.036296   27502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:04:38.045421   27502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 01:04:38.045477   27502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 01:04:38.054460   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 01:04:38.054487   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:04:38.054529   27502 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 01:04:38.054560   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:04:38.054563   27502 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 01:04:38.058519   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 01:04:38.058543   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 01:04:44.238102   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:04:44.238176   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:04:44.243414   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 01:04:44.243449   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 01:04:52.957119   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:04:52.972545   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:04:52.972661   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:04:52.977067   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 01:04:52.977105   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 01:04:53.394975   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 01:04:53.404083   27502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 01:04:53.421595   27502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:04:53.438149   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 01:04:53.455744   27502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:04:53.459918   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:04:53.473295   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:04:53.615633   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:04:53.634677   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:04:53.635164   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:04:53.635255   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:04:53.649859   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0729 01:04:53.650272   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:04:53.650691   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:04:53.650712   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:04:53.651022   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:04:53.651200   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:04:53.651345   27502 start.go:317] joinCluster: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:04:53.651456   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 01:04:53.651477   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:04:53.654468   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:53.654872   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:04:53.654910   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:04:53.655011   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:04:53.655191   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:04:53.655358   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:04:53.655488   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:04:53.818865   27502 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:04:53.818917   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9fpyd.eiwyuo54sxlezpb0 --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443"
	I0729 01:05:15.717189   27502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9fpyd.eiwyuo54sxlezpb0 --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m02 --control-plane --apiserver-advertise-address=192.168.39.68 --apiserver-bind-port=8443": (21.898248177s)
	I0729 01:05:15.717229   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 01:05:16.198226   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-845088-m02 minikube.k8s.io/updated_at=2024_07_29T01_05_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=ha-845088 minikube.k8s.io/primary=false
	I0729 01:05:16.348443   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-845088-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 01:05:16.496787   27502 start.go:319] duration metric: took 22.845446497s to joinCluster
	I0729 01:05:16.496888   27502 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:05:16.497205   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:05:16.498446   27502 out.go:177] * Verifying Kubernetes components...
	I0729 01:05:16.499848   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:05:16.736731   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:05:16.792115   27502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:05:16.792330   27502 kapi.go:59] client config for ha-845088: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 01:05:16.792382   27502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I0729 01:05:16.792556   27502 node_ready.go:35] waiting up to 6m0s for node "ha-845088-m02" to be "Ready" ...
	I0729 01:05:16.792631   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:16.792638   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:16.792646   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:16.792653   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:16.805176   27502 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0729 01:05:17.293652   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:17.293677   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:17.293686   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:17.293691   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:17.298666   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:17.792894   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:17.792915   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:17.792923   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:17.792927   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:17.797632   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:18.292859   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:18.292882   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:18.292903   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:18.292916   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:18.298491   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:18.792761   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:18.792793   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:18.792803   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:18.792809   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:18.796706   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:18.797543   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:19.293806   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:19.293833   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:19.293841   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:19.293847   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:19.296836   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:19.793097   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:19.793132   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:19.793140   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:19.793145   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:19.796615   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:20.292813   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:20.292835   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:20.292847   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:20.292854   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:20.296002   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:20.792927   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:20.792954   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:20.792966   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:20.792971   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:20.796438   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:21.293457   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:21.293478   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:21.293485   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:21.293488   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:21.297416   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:21.298137   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:21.793678   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:21.793701   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:21.793713   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:21.793722   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:21.797251   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:22.292758   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:22.292778   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:22.292788   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:22.292794   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:22.296502   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:22.793213   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:22.793234   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:22.793242   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:22.793246   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:22.796581   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:23.293534   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:23.293557   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:23.293565   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:23.293569   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:23.333422   27502 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0729 01:05:23.334323   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:23.792777   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:23.792803   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:23.792816   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:23.792822   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:23.796119   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:24.293257   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:24.293283   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:24.293293   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:24.293299   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:24.297444   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:24.793639   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:24.793660   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:24.793668   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:24.793672   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:24.797083   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:25.293390   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:25.293417   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:25.293429   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:25.293435   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:25.296744   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:25.793336   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:25.793360   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:25.793369   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:25.793376   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:25.796757   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:25.797450   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:26.292834   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:26.292856   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:26.292864   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:26.292867   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:26.297284   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:26.792729   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:26.792751   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:26.792759   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:26.792763   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:26.796484   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:27.293422   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:27.293441   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:27.293449   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:27.293453   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:27.296541   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:27.793726   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:27.793750   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:27.793760   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:27.793766   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:27.796949   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:27.797522   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:28.292921   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:28.292940   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:28.292949   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:28.292954   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:28.297106   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:28.792682   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:28.792716   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:28.792724   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:28.792728   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:28.796177   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:29.293024   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:29.293045   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:29.293053   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:29.293058   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:29.296525   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:29.792713   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:29.792733   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:29.792742   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:29.792748   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:29.796602   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:30.293344   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:30.293366   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:30.293374   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:30.293379   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:30.296981   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:30.297733   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:30.793625   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:30.793648   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:30.793656   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:30.793660   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:30.796833   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:31.292846   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:31.292876   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:31.292887   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:31.292892   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:31.296198   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:31.793426   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:31.793449   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:31.793456   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:31.793459   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:31.796583   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:32.293141   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:32.293168   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:32.293178   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:32.293184   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:32.296333   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:32.793529   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:32.793554   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:32.793562   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:32.793566   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:32.797264   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:32.797782   27502 node_ready.go:53] node "ha-845088-m02" has status "Ready":"False"
	I0729 01:05:33.293165   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:33.293184   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:33.293193   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:33.293196   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:33.298367   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:33.793732   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:33.793753   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:33.793761   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:33.793766   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:33.797430   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:34.293421   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:34.293442   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:34.293450   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:34.293455   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:34.296911   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:34.793537   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:34.793559   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:34.793567   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:34.793570   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:34.796836   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.293492   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:35.293519   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.293529   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.293532   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.297077   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.298065   27502 node_ready.go:49] node "ha-845088-m02" has status "Ready":"True"
	I0729 01:05:35.298094   27502 node_ready.go:38] duration metric: took 18.50551754s for node "ha-845088-m02" to be "Ready" ...
	I0729 01:05:35.298105   27502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:05:35.298175   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:35.298189   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.298199   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.298206   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.302895   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:35.308915   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.308984   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-26phs
	I0729 01:05:35.308989   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.308999   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.309006   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.312730   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.313304   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.313321   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.313328   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.313333   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.315704   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.316180   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.316205   27502 pod_ready.go:81] duration metric: took 7.266995ms for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.316218   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.316269   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x4jjj
	I0729 01:05:35.316277   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.316283   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.316288   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.318653   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.319250   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.319262   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.319268   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.319273   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.321625   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.322017   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.322031   27502 pod_ready.go:81] duration metric: took 5.802907ms for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.322041   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.322090   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088
	I0729 01:05:35.322099   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.322109   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.322112   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.324190   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.324643   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.324657   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.324666   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.324673   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.326735   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.327246   27502 pod_ready.go:92] pod "etcd-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.327260   27502 pod_ready.go:81] duration metric: took 5.212634ms for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.327267   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.327310   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m02
	I0729 01:05:35.327320   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.327328   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.327333   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.329466   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:35.329979   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:35.329991   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.329997   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.330002   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.331992   27502 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 01:05:35.332520   27502 pod_ready.go:92] pod "etcd-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.332535   27502 pod_ready.go:81] duration metric: took 5.262005ms for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.332550   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.494011   27502 request.go:629] Waited for 161.401722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:05:35.494091   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:05:35.494098   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.494109   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.494118   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.497476   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.694573   27502 request.go:629] Waited for 196.374942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.694635   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:35.694647   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.694657   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.694664   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.697739   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:35.698301   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:35.698316   27502 pod_ready.go:81] duration metric: took 365.759555ms for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.698324   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:35.894470   27502 request.go:629] Waited for 196.093447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:05:35.894558   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:05:35.894566   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:35.894575   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:35.894580   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:35.898260   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.094217   27502 request.go:629] Waited for 195.390243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.094272   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.094276   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.094284   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.094288   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.098180   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.098826   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:36.098843   27502 pod_ready.go:81] duration metric: took 400.512447ms for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.098853   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.293982   27502 request.go:629] Waited for 195.040587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:05:36.294048   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:05:36.294053   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.294060   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.294064   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.297270   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.494332   27502 request.go:629] Waited for 196.384953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:36.494403   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:36.494412   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.494420   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.494427   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.498075   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.498686   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:36.498705   27502 pod_ready.go:81] duration metric: took 399.843879ms for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.498714   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.694113   27502 request.go:629] Waited for 195.327672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:05:36.694179   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:05:36.694186   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.694196   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.694203   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.698055   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.894263   27502 request.go:629] Waited for 195.357228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.894322   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:36.894328   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:36.894339   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:36.894346   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:36.897615   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:36.898256   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:36.898274   27502 pod_ready.go:81] duration metric: took 399.5534ms for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:36.898284   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.094411   27502 request.go:629] Waited for 196.068776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:05:37.094486   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:05:37.094504   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.094516   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.094522   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.098653   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:37.293857   27502 request.go:629] Waited for 194.600226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:37.294013   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:37.294039   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.294052   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.294059   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.297322   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:37.297711   27502 pod_ready.go:92] pod "kube-proxy-j6gxl" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:37.297728   27502 pod_ready.go:81] duration metric: took 399.435925ms for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.297738   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.493965   27502 request.go:629] Waited for 196.14423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:05:37.494056   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:05:37.494063   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.494073   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.494081   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.500693   27502 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 01:05:37.694584   27502 request.go:629] Waited for 192.391917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:37.694678   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:37.694689   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.694705   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.694717   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.698804   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:05:37.699305   27502 pod_ready.go:92] pod "kube-proxy-tmzt7" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:37.699325   27502 pod_ready.go:81] duration metric: took 401.579876ms for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.699334   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:37.894464   27502 request.go:629] Waited for 195.060285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:05:37.894528   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:05:37.894535   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:37.894548   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:37.894553   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:37.897748   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.093740   27502 request.go:629] Waited for 195.304241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:38.093810   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:05:38.093821   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.093833   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.093839   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.097181   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.097971   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:38.097989   27502 pod_ready.go:81] duration metric: took 398.647856ms for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:38.097999   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:38.294178   27502 request.go:629] Waited for 196.110447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:05:38.294252   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:05:38.294259   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.294269   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.294278   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.297823   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.493896   27502 request.go:629] Waited for 195.394372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:38.493952   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:05:38.493959   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.493966   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.493973   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.496904   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:05:38.497673   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:05:38.497689   27502 pod_ready.go:81] duration metric: took 399.683512ms for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:05:38.497699   27502 pod_ready.go:38] duration metric: took 3.199579282s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:05:38.497716   27502 api_server.go:52] waiting for apiserver process to appear ...
	I0729 01:05:38.497765   27502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:05:38.514296   27502 api_server.go:72] duration metric: took 22.017368118s to wait for apiserver process to appear ...
	I0729 01:05:38.514316   27502 api_server.go:88] waiting for apiserver healthz status ...
	I0729 01:05:38.514331   27502 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0729 01:05:38.518520   27502 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0729 01:05:38.518582   27502 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I0729 01:05:38.518591   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.518601   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.518611   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.519521   27502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 01:05:38.519618   27502 api_server.go:141] control plane version: v1.30.3
	I0729 01:05:38.519634   27502 api_server.go:131] duration metric: took 5.313497ms to wait for apiserver health ...
	I0729 01:05:38.519642   27502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 01:05:38.694128   27502 request.go:629] Waited for 174.41718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:38.694189   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:38.694196   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.694204   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.694211   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.699228   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:38.704674   27502 system_pods.go:59] 17 kube-system pods found
	I0729 01:05:38.704697   27502 system_pods.go:61] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:05:38.704703   27502 system_pods.go:61] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:05:38.704706   27502 system_pods.go:61] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:05:38.704710   27502 system_pods.go:61] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:05:38.704714   27502 system_pods.go:61] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:05:38.704717   27502 system_pods.go:61] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:05:38.704723   27502 system_pods.go:61] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:05:38.704726   27502 system_pods.go:61] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:05:38.704730   27502 system_pods.go:61] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:05:38.704733   27502 system_pods.go:61] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:05:38.704736   27502 system_pods.go:61] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:05:38.704740   27502 system_pods.go:61] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:05:38.704744   27502 system_pods.go:61] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:05:38.704747   27502 system_pods.go:61] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:05:38.704749   27502 system_pods.go:61] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:05:38.704752   27502 system_pods.go:61] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:05:38.704755   27502 system_pods.go:61] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:05:38.704761   27502 system_pods.go:74] duration metric: took 185.111935ms to wait for pod list to return data ...
	I0729 01:05:38.704769   27502 default_sa.go:34] waiting for default service account to be created ...
	I0729 01:05:38.894055   27502 request.go:629] Waited for 189.221463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:05:38.894118   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:05:38.894125   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:38.894134   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:38.894143   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:38.897226   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:38.897414   27502 default_sa.go:45] found service account: "default"
	I0729 01:05:38.897428   27502 default_sa.go:55] duration metric: took 192.65029ms for default service account to be created ...
	I0729 01:05:38.897435   27502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 01:05:39.093698   27502 request.go:629] Waited for 196.210309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:39.093764   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:05:39.093771   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:39.093780   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:39.093789   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:39.099136   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:05:39.103860   27502 system_pods.go:86] 17 kube-system pods found
	I0729 01:05:39.103883   27502 system_pods.go:89] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:05:39.103887   27502 system_pods.go:89] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:05:39.103891   27502 system_pods.go:89] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:05:39.103895   27502 system_pods.go:89] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:05:39.103899   27502 system_pods.go:89] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:05:39.103903   27502 system_pods.go:89] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:05:39.103906   27502 system_pods.go:89] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:05:39.103911   27502 system_pods.go:89] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:05:39.103915   27502 system_pods.go:89] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:05:39.103919   27502 system_pods.go:89] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:05:39.103923   27502 system_pods.go:89] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:05:39.103929   27502 system_pods.go:89] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:05:39.103933   27502 system_pods.go:89] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:05:39.103936   27502 system_pods.go:89] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:05:39.103939   27502 system_pods.go:89] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:05:39.103943   27502 system_pods.go:89] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:05:39.103947   27502 system_pods.go:89] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:05:39.103954   27502 system_pods.go:126] duration metric: took 206.514725ms to wait for k8s-apps to be running ...
	I0729 01:05:39.103961   27502 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 01:05:39.104003   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:05:39.119401   27502 system_svc.go:56] duration metric: took 15.427514ms WaitForService to wait for kubelet
	I0729 01:05:39.119424   27502 kubeadm.go:582] duration metric: took 22.62250259s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:05:39.119441   27502 node_conditions.go:102] verifying NodePressure condition ...
	I0729 01:05:39.293824   27502 request.go:629] Waited for 174.318053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I0729 01:05:39.293905   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I0729 01:05:39.293913   27502 round_trippers.go:469] Request Headers:
	I0729 01:05:39.293924   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:05:39.293935   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:05:39.297453   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:05:39.298167   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:05:39.298186   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:05:39.298195   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:05:39.298199   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:05:39.298203   27502 node_conditions.go:105] duration metric: took 178.757884ms to run NodePressure ...
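	The wait loop traced above covers the component map reported at the kubeadm step (apiserver, apps_running, default_sa, kubelet, node_ready, system_pods). As an illustrative aside, roughly equivalent manual checks, assuming the ha-845088 kubeconfig context this profile registers, would be:

	# Roughly equivalent manual checks (illustrative; assumes the ha-845088 context exists)
	kubectl --context ha-845088 get pods -n kube-system
	kubectl --context ha-845088 get serviceaccount default
	kubectl --context ha-845088 describe nodes | grep -A 6 "Conditions:"
	out/minikube-linux-amd64 -p ha-845088 ssh "sudo systemctl is-active kubelet"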
	I0729 01:05:39.298213   27502 start.go:241] waiting for startup goroutines ...
	I0729 01:05:39.298234   27502 start.go:255] writing updated cluster config ...
	I0729 01:05:39.300330   27502 out.go:177] 
	I0729 01:05:39.301837   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:05:39.301939   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:05:39.303782   27502 out.go:177] * Starting "ha-845088-m03" control-plane node in "ha-845088" cluster
	I0729 01:05:39.305172   27502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:05:39.305192   27502 cache.go:56] Caching tarball of preloaded images
	I0729 01:05:39.305285   27502 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:05:39.305295   27502 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:05:39.305374   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:05:39.305518   27502 start.go:360] acquireMachinesLock for ha-845088-m03: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:05:39.305556   27502 start.go:364] duration metric: took 20.255µs to acquireMachinesLock for "ha-845088-m03"
	I0729 01:05:39.305574   27502 start.go:93] Provisioning new machine with config: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:05:39.305660   27502 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 01:05:39.307190   27502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:05:39.307257   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:05:39.307287   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:05:39.324326   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0729 01:05:39.324740   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:05:39.325176   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:05:39.325195   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:05:39.325498   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:05:39.325670   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:05:39.325810   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:05:39.325964   27502 start.go:159] libmachine.API.Create for "ha-845088" (driver="kvm2")
	I0729 01:05:39.325991   27502 client.go:168] LocalClient.Create starting
	I0729 01:05:39.326025   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:05:39.326065   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:05:39.326083   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:05:39.326149   27502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:05:39.326176   27502 main.go:141] libmachine: Decoding PEM data...
	I0729 01:05:39.326192   27502 main.go:141] libmachine: Parsing certificate...
	I0729 01:05:39.326218   27502 main.go:141] libmachine: Running pre-create checks...
	I0729 01:05:39.326230   27502 main.go:141] libmachine: (ha-845088-m03) Calling .PreCreateCheck
	I0729 01:05:39.326386   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetConfigRaw
	I0729 01:05:39.326737   27502 main.go:141] libmachine: Creating machine...
	I0729 01:05:39.326750   27502 main.go:141] libmachine: (ha-845088-m03) Calling .Create
	I0729 01:05:39.326876   27502 main.go:141] libmachine: (ha-845088-m03) Creating KVM machine...
	I0729 01:05:39.328256   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found existing default KVM network
	I0729 01:05:39.328414   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found existing private KVM network mk-ha-845088
	I0729 01:05:39.328543   27502 main.go:141] libmachine: (ha-845088-m03) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03 ...
	I0729 01:05:39.328569   27502 main.go:141] libmachine: (ha-845088-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:05:39.328641   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.328542   28354 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:05:39.328790   27502 main.go:141] libmachine: (ha-845088-m03) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:05:39.581441   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.581321   28354 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa...
	I0729 01:05:39.873658   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.873558   28354 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/ha-845088-m03.rawdisk...
	I0729 01:05:39.873687   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Writing magic tar header
	I0729 01:05:39.873702   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Writing SSH key tar header
	I0729 01:05:39.873712   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:39.873660   28354 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03 ...
	I0729 01:05:39.873826   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03 (perms=drwx------)
	I0729 01:05:39.873857   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03
	I0729 01:05:39.873865   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:05:39.873873   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:05:39.873880   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:05:39.873889   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:05:39.873897   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:05:39.873905   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:05:39.873912   27502 main.go:141] libmachine: (ha-845088-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:05:39.873918   27502 main.go:141] libmachine: (ha-845088-m03) Creating domain...
	I0729 01:05:39.873924   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:05:39.873944   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:05:39.873965   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:05:39.873978   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Checking permissions on dir: /home
	I0729 01:05:39.873986   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Skipping /home - not owner
	I0729 01:05:39.874850   27502 main.go:141] libmachine: (ha-845088-m03) define libvirt domain using xml: 
	I0729 01:05:39.874872   27502 main.go:141] libmachine: (ha-845088-m03) <domain type='kvm'>
	I0729 01:05:39.874882   27502 main.go:141] libmachine: (ha-845088-m03)   <name>ha-845088-m03</name>
	I0729 01:05:39.874893   27502 main.go:141] libmachine: (ha-845088-m03)   <memory unit='MiB'>2200</memory>
	I0729 01:05:39.874905   27502 main.go:141] libmachine: (ha-845088-m03)   <vcpu>2</vcpu>
	I0729 01:05:39.874916   27502 main.go:141] libmachine: (ha-845088-m03)   <features>
	I0729 01:05:39.874925   27502 main.go:141] libmachine: (ha-845088-m03)     <acpi/>
	I0729 01:05:39.874934   27502 main.go:141] libmachine: (ha-845088-m03)     <apic/>
	I0729 01:05:39.874943   27502 main.go:141] libmachine: (ha-845088-m03)     <pae/>
	I0729 01:05:39.874949   27502 main.go:141] libmachine: (ha-845088-m03)     
	I0729 01:05:39.874954   27502 main.go:141] libmachine: (ha-845088-m03)   </features>
	I0729 01:05:39.874959   27502 main.go:141] libmachine: (ha-845088-m03)   <cpu mode='host-passthrough'>
	I0729 01:05:39.874964   27502 main.go:141] libmachine: (ha-845088-m03)   
	I0729 01:05:39.874974   27502 main.go:141] libmachine: (ha-845088-m03)   </cpu>
	I0729 01:05:39.874980   27502 main.go:141] libmachine: (ha-845088-m03)   <os>
	I0729 01:05:39.874989   27502 main.go:141] libmachine: (ha-845088-m03)     <type>hvm</type>
	I0729 01:05:39.875018   27502 main.go:141] libmachine: (ha-845088-m03)     <boot dev='cdrom'/>
	I0729 01:05:39.875037   27502 main.go:141] libmachine: (ha-845088-m03)     <boot dev='hd'/>
	I0729 01:05:39.875051   27502 main.go:141] libmachine: (ha-845088-m03)     <bootmenu enable='no'/>
	I0729 01:05:39.875070   27502 main.go:141] libmachine: (ha-845088-m03)   </os>
	I0729 01:05:39.875081   27502 main.go:141] libmachine: (ha-845088-m03)   <devices>
	I0729 01:05:39.875096   27502 main.go:141] libmachine: (ha-845088-m03)     <disk type='file' device='cdrom'>
	I0729 01:05:39.875115   27502 main.go:141] libmachine: (ha-845088-m03)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/boot2docker.iso'/>
	I0729 01:05:39.875128   27502 main.go:141] libmachine: (ha-845088-m03)       <target dev='hdc' bus='scsi'/>
	I0729 01:05:39.875138   27502 main.go:141] libmachine: (ha-845088-m03)       <readonly/>
	I0729 01:05:39.875148   27502 main.go:141] libmachine: (ha-845088-m03)     </disk>
	I0729 01:05:39.875159   27502 main.go:141] libmachine: (ha-845088-m03)     <disk type='file' device='disk'>
	I0729 01:05:39.875172   27502 main.go:141] libmachine: (ha-845088-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:05:39.875196   27502 main.go:141] libmachine: (ha-845088-m03)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/ha-845088-m03.rawdisk'/>
	I0729 01:05:39.875213   27502 main.go:141] libmachine: (ha-845088-m03)       <target dev='hda' bus='virtio'/>
	I0729 01:05:39.875224   27502 main.go:141] libmachine: (ha-845088-m03)     </disk>
	I0729 01:05:39.875234   27502 main.go:141] libmachine: (ha-845088-m03)     <interface type='network'>
	I0729 01:05:39.875246   27502 main.go:141] libmachine: (ha-845088-m03)       <source network='mk-ha-845088'/>
	I0729 01:05:39.875255   27502 main.go:141] libmachine: (ha-845088-m03)       <model type='virtio'/>
	I0729 01:05:39.875262   27502 main.go:141] libmachine: (ha-845088-m03)     </interface>
	I0729 01:05:39.875269   27502 main.go:141] libmachine: (ha-845088-m03)     <interface type='network'>
	I0729 01:05:39.875275   27502 main.go:141] libmachine: (ha-845088-m03)       <source network='default'/>
	I0729 01:05:39.875282   27502 main.go:141] libmachine: (ha-845088-m03)       <model type='virtio'/>
	I0729 01:05:39.875297   27502 main.go:141] libmachine: (ha-845088-m03)     </interface>
	I0729 01:05:39.875313   27502 main.go:141] libmachine: (ha-845088-m03)     <serial type='pty'>
	I0729 01:05:39.875326   27502 main.go:141] libmachine: (ha-845088-m03)       <target port='0'/>
	I0729 01:05:39.875337   27502 main.go:141] libmachine: (ha-845088-m03)     </serial>
	I0729 01:05:39.875350   27502 main.go:141] libmachine: (ha-845088-m03)     <console type='pty'>
	I0729 01:05:39.875361   27502 main.go:141] libmachine: (ha-845088-m03)       <target type='serial' port='0'/>
	I0729 01:05:39.875373   27502 main.go:141] libmachine: (ha-845088-m03)     </console>
	I0729 01:05:39.875387   27502 main.go:141] libmachine: (ha-845088-m03)     <rng model='virtio'>
	I0729 01:05:39.875402   27502 main.go:141] libmachine: (ha-845088-m03)       <backend model='random'>/dev/random</backend>
	I0729 01:05:39.875410   27502 main.go:141] libmachine: (ha-845088-m03)     </rng>
	I0729 01:05:39.875439   27502 main.go:141] libmachine: (ha-845088-m03)     
	I0729 01:05:39.875448   27502 main.go:141] libmachine: (ha-845088-m03)     
	I0729 01:05:39.875491   27502 main.go:141] libmachine: (ha-845088-m03)   </devices>
	I0729 01:05:39.875514   27502 main.go:141] libmachine: (ha-845088-m03) </domain>
	I0729 01:05:39.875526   27502 main.go:141] libmachine: (ha-845088-m03) 
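	The XML printed above is the libvirt domain definition generated for the new ha-845088-m03 node (boot ISO, raw disk, two virtio NICs on mk-ha-845088 and default, serial console, virtio RNG). As an illustrative aside, once the domain exists it can be inspected with stock libvirt tooling against the same qemu:///system URI:

	# Illustrative inspection of the generated domain with stock libvirt tools
	virsh --connect qemu:///system dumpxml ha-845088-m03
	virsh --connect qemu:///system domiflist ha-845088-m03
	virsh --connect qemu:///system net-dhcp-leases mk-ha-845088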
	I0729 01:05:39.882005   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:53:46:2d in network default
	I0729 01:05:39.882531   27502 main.go:141] libmachine: (ha-845088-m03) Ensuring networks are active...
	I0729 01:05:39.882565   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:39.883346   27502 main.go:141] libmachine: (ha-845088-m03) Ensuring network default is active
	I0729 01:05:39.883713   27502 main.go:141] libmachine: (ha-845088-m03) Ensuring network mk-ha-845088 is active
	I0729 01:05:39.884078   27502 main.go:141] libmachine: (ha-845088-m03) Getting domain xml...
	I0729 01:05:39.884758   27502 main.go:141] libmachine: (ha-845088-m03) Creating domain...
	I0729 01:05:41.107959   27502 main.go:141] libmachine: (ha-845088-m03) Waiting to get IP...
	I0729 01:05:41.108667   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:41.109143   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:41.109163   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:41.109114   28354 retry.go:31] will retry after 214.34753ms: waiting for machine to come up
	I0729 01:05:41.325647   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:41.326155   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:41.326184   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:41.326106   28354 retry.go:31] will retry after 375.969123ms: waiting for machine to come up
	I0729 01:05:41.703622   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:41.704053   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:41.704078   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:41.704023   28354 retry.go:31] will retry after 475.943307ms: waiting for machine to come up
	I0729 01:05:42.181142   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:42.181586   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:42.181632   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:42.181563   28354 retry.go:31] will retry after 559.597658ms: waiting for machine to come up
	I0729 01:05:42.742209   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:42.742637   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:42.742667   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:42.742571   28354 retry.go:31] will retry after 635.877296ms: waiting for machine to come up
	I0729 01:05:43.380286   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:43.380759   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:43.380786   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:43.380705   28354 retry.go:31] will retry after 895.342626ms: waiting for machine to come up
	I0729 01:05:44.277705   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:44.278180   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:44.278210   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:44.278127   28354 retry.go:31] will retry after 868.037692ms: waiting for machine to come up
	I0729 01:05:45.148047   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:45.148487   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:45.148517   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:45.148461   28354 retry.go:31] will retry after 998.649569ms: waiting for machine to come up
	I0729 01:05:46.149225   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:46.149646   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:46.149673   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:46.149587   28354 retry.go:31] will retry after 1.731737854s: waiting for machine to come up
	I0729 01:05:47.883017   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:47.883474   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:47.883511   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:47.883405   28354 retry.go:31] will retry after 2.192020926s: waiting for machine to come up
	I0729 01:05:50.077934   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:50.078526   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:50.078555   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:50.078479   28354 retry.go:31] will retry after 2.583552543s: waiting for machine to come up
	I0729 01:05:52.665052   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:52.665437   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:52.665463   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:52.665420   28354 retry.go:31] will retry after 2.260400072s: waiting for machine to come up
	I0729 01:05:54.927407   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:54.927812   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:54.927841   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:54.927768   28354 retry.go:31] will retry after 4.178032033s: waiting for machine to come up
	I0729 01:05:59.110167   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:05:59.110627   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find current IP address of domain ha-845088-m03 in network mk-ha-845088
	I0729 01:05:59.110658   27502 main.go:141] libmachine: (ha-845088-m03) DBG | I0729 01:05:59.110531   28354 retry.go:31] will retry after 4.108724133s: waiting for machine to come up
	I0729 01:06:03.223090   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.223468   27502 main.go:141] libmachine: (ha-845088-m03) Found IP for machine: 192.168.39.243
	I0729 01:06:03.223498   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has current primary IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.223507   27502 main.go:141] libmachine: (ha-845088-m03) Reserving static IP address...
	I0729 01:06:03.223942   27502 main.go:141] libmachine: (ha-845088-m03) DBG | unable to find host DHCP lease matching {name: "ha-845088-m03", mac: "52:54:00:67:6a:ee", ip: "192.168.39.243"} in network mk-ha-845088
	I0729 01:06:03.303198   27502 main.go:141] libmachine: (ha-845088-m03) Reserved static IP address: 192.168.39.243
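	The retries above are the driver polling, with growing delays, until the guest's MAC 52:54:00:67:6a:ee obtains a DHCP lease in mk-ha-845088. A rough manual equivalent, sketched here with arbitrary loop bounds, is to poll the libvirt lease table for that MAC:

	# Sketch: poll the lease table until the node's MAC appears (arbitrary 30 x 2s bound)
	for i in $(seq 1 30); do
	  if virsh --connect qemu:///system net-dhcp-leases mk-ha-845088 | grep -q '52:54:00:67:6a:ee'; then
	    echo "lease acquired"; break
	  fi
	  sleep 2
	done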
	I0729 01:06:03.303229   27502 main.go:141] libmachine: (ha-845088-m03) Waiting for SSH to be available...
	I0729 01:06:03.303240   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Getting to WaitForSSH function...
	I0729 01:06:03.306121   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.306568   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.306596   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.306694   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Using SSH client type: external
	I0729 01:06:03.306715   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa (-rw-------)
	I0729 01:06:03.306742   27502 main.go:141] libmachine: (ha-845088-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:06:03.306754   27502 main.go:141] libmachine: (ha-845088-m03) DBG | About to run SSH command:
	I0729 01:06:03.306772   27502 main.go:141] libmachine: (ha-845088-m03) DBG | exit 0
	I0729 01:06:03.435151   27502 main.go:141] libmachine: (ha-845088-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 01:06:03.435395   27502 main.go:141] libmachine: (ha-845088-m03) KVM machine creation complete!
	I0729 01:06:03.435741   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetConfigRaw
	I0729 01:06:03.436328   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:03.436538   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:03.436694   27502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:06:03.436711   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:06:03.438008   27502 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:06:03.438025   27502 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:06:03.438030   27502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:06:03.438036   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.440559   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.440962   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.440991   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.441177   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.441362   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.441505   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.441610   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.441746   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.441948   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.441960   27502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:06:03.558514   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:06:03.558543   27502 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:06:03.558553   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.561702   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.562184   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.562211   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.562407   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.562578   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.562747   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.562892   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.563120   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.563323   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.563336   27502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:06:03.680201   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:06:03.680260   27502 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:06:03.680273   27502 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:06:03.680290   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:06:03.680527   27502 buildroot.go:166] provisioning hostname "ha-845088-m03"
	I0729 01:06:03.680558   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:06:03.680778   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.683683   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.684076   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.684104   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.684241   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.684423   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.684588   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.684716   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.684888   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.685083   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.685095   27502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088-m03 && echo "ha-845088-m03" | sudo tee /etc/hostname
	I0729 01:06:03.811703   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088-m03
	
	I0729 01:06:03.811736   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.814632   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.815049   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.815093   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.815309   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:03.815501   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.815669   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:03.815820   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:03.815959   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:03.816118   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:03.816133   27502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:06:03.938958   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:06:03.938986   27502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:06:03.939012   27502 buildroot.go:174] setting up certificates
	I0729 01:06:03.939025   27502 provision.go:84] configureAuth start
	I0729 01:06:03.939045   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetMachineName
	I0729 01:06:03.939363   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:03.942159   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.942561   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.942598   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.942760   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:03.945067   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.945393   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:03.945418   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:03.945599   27502 provision.go:143] copyHostCerts
	I0729 01:06:03.945629   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:06:03.945665   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:06:03.945677   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:06:03.945758   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:06:03.945860   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:06:03.945884   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:06:03.945892   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:06:03.945931   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:06:03.945993   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:06:03.946015   27502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:06:03.946025   27502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:06:03.946057   27502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:06:03.946129   27502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088-m03 san=[127.0.0.1 192.168.39.243 ha-845088-m03 localhost minikube]
	I0729 01:06:04.366831   27502 provision.go:177] copyRemoteCerts
	I0729 01:06:04.366890   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:06:04.366912   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.369754   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.370177   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.370208   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.370466   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.370716   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.370876   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.371026   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:04.462183   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:06:04.462291   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:06:04.487519   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:06:04.487584   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 01:06:04.513367   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:06:04.513425   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:06:04.539048   27502 provision.go:87] duration metric: took 600.004482ms to configureAuth
	I0729 01:06:04.539110   27502 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:06:04.539302   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:06:04.539366   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.542002   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.542446   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.542473   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.542642   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.542924   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.543083   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.543199   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.543379   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:04.543535   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:04.543550   27502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:06:04.811026   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:06:04.811050   27502 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:06:04.811079   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetURL
	I0729 01:06:04.812295   27502 main.go:141] libmachine: (ha-845088-m03) DBG | Using libvirt version 6000000
	I0729 01:06:04.814723   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.815180   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.815225   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.815360   27502 main.go:141] libmachine: Docker is up and running!
	I0729 01:06:04.815375   27502 main.go:141] libmachine: Reticulating splines...
	I0729 01:06:04.815382   27502 client.go:171] duration metric: took 25.489382959s to LocalClient.Create
	I0729 01:06:04.815403   27502 start.go:167] duration metric: took 25.48943964s to libmachine.API.Create "ha-845088"
	I0729 01:06:04.815411   27502 start.go:293] postStartSetup for "ha-845088-m03" (driver="kvm2")
	I0729 01:06:04.815420   27502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:06:04.815436   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:04.815632   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:06:04.815655   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.818038   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.818468   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.818499   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.818610   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.818793   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.818961   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.819114   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:04.906380   27502 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:06:04.911051   27502 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:06:04.911098   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:06:04.911172   27502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:06:04.911266   27502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:06:04.911279   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:06:04.911382   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:06:04.920907   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:06:04.950815   27502 start.go:296] duration metric: took 135.390141ms for postStartSetup
	I0729 01:06:04.950873   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetConfigRaw
	I0729 01:06:04.951586   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:04.954390   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.954798   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.954830   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.955091   27502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:06:04.955286   27502 start.go:128] duration metric: took 25.649616647s to createHost
	I0729 01:06:04.955319   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:04.957627   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.957948   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:04.957978   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:04.958093   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:04.958275   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.958437   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:04.958580   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:04.958735   27502 main.go:141] libmachine: Using SSH client type: native
	I0729 01:06:04.958894   27502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0729 01:06:04.958903   27502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:06:05.072345   27502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215165.049350486
	
	I0729 01:06:05.072371   27502 fix.go:216] guest clock: 1722215165.049350486
	I0729 01:06:05.072378   27502 fix.go:229] Guest: 2024-07-29 01:06:05.049350486 +0000 UTC Remote: 2024-07-29 01:06:04.955297652 +0000 UTC m=+172.871587953 (delta=94.052834ms)
	I0729 01:06:05.072394   27502 fix.go:200] guest clock delta is within tolerance: 94.052834ms
	I0729 01:06:05.072399   27502 start.go:83] releasing machines lock for "ha-845088-m03", held for 25.766834934s
	I0729 01:06:05.072417   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.072665   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:05.075534   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.075917   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:05.075934   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.078209   27502 out.go:177] * Found network options:
	I0729 01:06:05.079571   27502 out.go:177]   - NO_PROXY=192.168.39.69,192.168.39.68
	W0729 01:06:05.080720   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 01:06:05.080742   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:06:05.080755   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.081231   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.081408   27502 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:06:05.081498   27502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:06:05.081537   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	W0729 01:06:05.081597   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 01:06:05.081617   27502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 01:06:05.081670   27502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:06:05.081688   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:06:05.084428   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.084592   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.084850   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:05.084875   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.085018   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:05.085171   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:05.085187   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:05.085198   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:05.085339   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:06:05.085402   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:05.085513   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:06:05.085588   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:05.085661   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:06:05.085839   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:06:05.319094   27502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:06:05.325900   27502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:06:05.325961   27502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:06:05.343733   27502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 01:06:05.343759   27502 start.go:495] detecting cgroup driver to use...
	I0729 01:06:05.343833   27502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:06:05.361972   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:06:05.376158   27502 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:06:05.376212   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:06:05.390149   27502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:06:05.404220   27502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:06:05.530056   27502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:06:05.668459   27502 docker.go:233] disabling docker service ...
	I0729 01:06:05.668541   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:06:05.685042   27502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:06:05.698627   27502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:06:05.833352   27502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:06:05.948485   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:06:05.967279   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:06:05.990173   27502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:06:05.990244   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.002326   27502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:06:06.002385   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.013743   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.025442   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.036718   27502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:06:06.048527   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.060094   27502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.079343   27502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:06:06.090601   27502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:06:06.100699   27502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:06:06.100778   27502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:06:06.114830   27502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:06:06.124586   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:06:06.246180   27502 ssh_runner.go:195] Run: sudo systemctl restart crio
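(Editor's sketch, not part of the log: the sed edits above configure CRI-O through the 02-crio.conf drop-in before the restart. Under the assumption that the commands succeeded as logged, the result can be sanity-checked on the guest like this; the expected values in the comments are inferred from the commands, not from the file itself.)

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])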
	I0729 01:06:06.386523   27502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:06:06.386595   27502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:06:06.391480   27502 start.go:563] Will wait 60s for crictl version
	I0729 01:06:06.391535   27502 ssh_runner.go:195] Run: which crictl
	I0729 01:06:06.395224   27502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:06:06.448077   27502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:06:06.448174   27502 ssh_runner.go:195] Run: crio --version
	I0729 01:06:06.477971   27502 ssh_runner.go:195] Run: crio --version
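(Editor's sketch: the crictl/crio version probes above can be repeated by hand against the socket path the test configured; this assumes the crictl.yaml written earlier is in place.)

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a   # containers, once pods exist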
	I0729 01:06:06.509233   27502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:06:06.510624   27502 out.go:177]   - env NO_PROXY=192.168.39.69
	I0729 01:06:06.512009   27502 out.go:177]   - env NO_PROXY=192.168.39.69,192.168.39.68
	I0729 01:06:06.513160   27502 main.go:141] libmachine: (ha-845088-m03) Calling .GetIP
	I0729 01:06:06.515805   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:06.516145   27502 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:06:06.516176   27502 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:06:06.516327   27502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:06:06.520609   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
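(Editor's sketch: the one-liner above is minikube's replace-or-append idiom for /etc/hosts: drop any existing host.minikube.internal line, append the fresh mapping, and copy the result back so the entry stays unique. The same idiom written out step by step, with the IP and hostname taken from the log:)

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts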
	I0729 01:06:06.533819   27502 mustload.go:65] Loading cluster: ha-845088
	I0729 01:06:06.534071   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:06:06.534419   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:06:06.534463   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:06:06.549210   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0729 01:06:06.549644   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:06:06.550076   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:06:06.550093   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:06:06.550396   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:06:06.550591   27502 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:06:06.552250   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:06:06.552528   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:06:06.552566   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:06:06.567532   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0729 01:06:06.567966   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:06:06.568449   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:06:06.568470   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:06:06.568779   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:06:06.569014   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:06:06.569152   27502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.243
	I0729 01:06:06.569169   27502 certs.go:194] generating shared ca certs ...
	I0729 01:06:06.569188   27502 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:06:06.569313   27502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:06:06.569349   27502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:06:06.569358   27502 certs.go:256] generating profile certs ...
	I0729 01:06:06.569434   27502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:06:06.569473   27502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb
	I0729 01:06:06.569495   27502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.68 192.168.39.243 192.168.39.254]
	I0729 01:06:06.802077   27502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb ...
	I0729 01:06:06.802115   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb: {Name:mkd50706cc4400eb4c34783cde4de9c621fa6155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:06:06.802298   27502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb ...
	I0729 01:06:06.802313   27502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb: {Name:mkb018f03dff67b92381e70e7a91ba8bfe22d1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:06:06.802403   27502 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.1682affb -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:06:06.802548   27502 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.1682affb -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
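(Editor's sketch: the regenerated apiserver cert is signed for every control-plane IP plus the service IP, localhost and the VIP, i.e. the IP list logged above. The SANs on the written certificate can be confirmed with openssl, using the profile path from the log:)

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expected to list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.69, 192.168.39.68,
    # 192.168.39.243 and the VIP 192.168.39.254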
	I0729 01:06:06.802696   27502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:06:06.802713   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:06:06.802733   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:06:06.802752   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:06:06.802771   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:06:06.802789   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:06:06.802807   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:06:06.802824   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:06:06.802839   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:06:06.802908   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:06:06.802949   27502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:06:06.802964   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:06:06.802997   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:06:06.803028   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:06:06.803077   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:06:06.803142   27502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:06:06.803180   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:06.803199   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:06:06.803217   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:06:06.803255   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:06:06.806378   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:06.806789   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:06:06.806816   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:06.807046   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:06:06.807275   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:06:06.807443   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:06:06.807611   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:06:06.883465   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 01:06:06.888917   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 01:06:06.900921   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 01:06:06.906724   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0729 01:06:06.921455   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 01:06:06.927215   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 01:06:06.940525   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 01:06:06.946114   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 01:06:06.961058   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 01:06:06.965520   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 01:06:06.984525   27502 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 01:06:06.989032   27502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 01:06:07.001578   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:06:07.029642   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:06:07.054791   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:06:07.080274   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:06:07.104019   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 01:06:07.129132   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:06:07.154543   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:06:07.179757   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:06:07.204189   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:06:07.228019   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:06:07.253233   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:06:07.278740   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 01:06:07.296756   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0729 01:06:07.313630   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 01:06:07.331828   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 01:06:07.357423   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 01:06:07.380635   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 01:06:07.399096   27502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 01:06:07.416823   27502 ssh_runner.go:195] Run: openssl version
	I0729 01:06:07.422965   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:06:07.433712   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:07.438211   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:07.438265   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:06:07.443998   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:06:07.454684   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:06:07.465341   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:06:07.469898   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:06:07.469959   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:06:07.475841   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:06:07.486935   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:06:07.498173   27502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:06:07.503253   27502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:06:07.503303   27502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:06:07.509045   27502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
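(Editor's sketch: each "ln -fs ... /etc/ssl/certs/<hash>.0" above follows the OpenSSL hashed-directory convention: the link name is the certificate's subject hash, which is exactly what the preceding "openssl x509 -hash -noout" call prints. Deriving one of those link names by hand, with the cert path taken from the log:)

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem)
    echo "$h"                                   # 3ec20f2e for this cert, matching the link created above
    sudo ln -fs /etc/ssl/certs/166232.pem "/etc/ssl/certs/${h}.0"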
	I0729 01:06:07.519571   27502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:06:07.523630   27502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:06:07.523687   27502 kubeadm.go:934] updating node {m03 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0729 01:06:07.523788   27502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:06:07.523822   27502 kube-vip.go:115] generating kube-vip config ...
	I0729 01:06:07.523867   27502 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:06:07.539345   27502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:06:07.539415   27502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
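(Editor's sketch: the manifest above runs kube-vip as a static pod on each control-plane node; it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few steps further down, and the lb_enable/lb_port settings make the VIP 192.168.39.254:8443 front the apiservers. Once kubelet has started the pod, the VIP can be checked roughly like this; anonymous access to /healthz is an assumption about the cluster's default RBAC.)

    ip -4 addr show eth0 | grep 192.168.39.254      # VIP bound on the current leader's interface
    curl -sk https://192.168.39.254:8443/healthz    # expect: ok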
	I0729 01:06:07.539479   27502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:06:07.549335   27502 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 01:06:07.549414   27502 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 01:06:07.559210   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 01:06:07.559222   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 01:06:07.559247   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:06:07.559263   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:06:07.559313   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 01:06:07.559210   27502 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 01:06:07.559383   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:06:07.559461   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 01:06:07.577126   27502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:06:07.577126   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 01:06:07.577203   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 01:06:07.577219   27502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 01:06:07.577203   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 01:06:07.577249   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 01:06:07.607613   27502 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 01:06:07.607657   27502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
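(Editor's sketch: the kubeadm/kubectl/kubelet binaries are fetched from dl.k8s.io with a "?checksum=file:...sha256" query, i.e. each download is verified against the published SHA-256 before being copied onto the node. The equivalent manual check, with the URLs taken from the log:)

    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # expect: kubelet: OK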
	I0729 01:06:08.493286   27502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 01:06:08.503399   27502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 01:06:08.521569   27502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:06:08.539657   27502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 01:06:08.558633   27502 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:06:08.562915   27502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:06:08.576034   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:06:08.701810   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
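(Editor's sketch: at this point the kubelet unit, its 10-kubeadm.conf drop-in, and the kube-vip manifest are in place and kubelet has been started. The rendered configuration can be inspected on the node like this, with paths taken from the log:)

    systemctl cat kubelet                  # unit file plus the 10-kubeadm.conf drop-in
    ls /etc/kubernetes/manifests/          # should include kube-vip.yaml
    systemctl is-active kubelet            # expect: active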
	I0729 01:06:08.717937   27502 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:06:08.718364   27502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:06:08.718413   27502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:06:08.734339   27502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0729 01:06:08.734859   27502 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:06:08.735646   27502 main.go:141] libmachine: Using API Version  1
	I0729 01:06:08.735675   27502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:06:08.736037   27502 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:06:08.736235   27502 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:06:08.736386   27502 start.go:317] joinCluster: &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:06:08.736538   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 01:06:08.736559   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:06:08.739516   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:08.739926   27502 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:06:08.739943   27502 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:06:08.740174   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:06:08.740326   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:06:08.740496   27502 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:06:08.740619   27502 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:06:08.910172   27502 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:06:08.910223   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5zqsql.wm8sxofz5f2yakhi --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m03 --control-plane --apiserver-advertise-address=192.168.39.243 --apiserver-bind-port=8443"
	I0729 01:06:31.990460   27502 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5zqsql.wm8sxofz5f2yakhi --discovery-token-ca-cert-hash sha256:2259b3e93c5dd9b5daf5a1af8e350826f214305256ac858c5baa518ad685cc90 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-845088-m03 --control-plane --apiserver-advertise-address=192.168.39.243 --apiserver-bind-port=8443": (23.080204816s)
	I0729 01:06:31.990493   27502 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 01:06:32.529477   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-845088-m03 minikube.k8s.io/updated_at=2024_07_29T01_06_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1 minikube.k8s.io/name=ha-845088 minikube.k8s.io/primary=false
	I0729 01:06:32.662980   27502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-845088-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 01:06:32.773580   27502 start.go:319] duration metric: took 24.037189575s to joinCluster
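(Editor's sketch: joining m03 as an additional control plane reuses a bootstrap token and CA hash minted on the primary via the "kubeadm token create --print-join-command --ttl=0" call logged above; because minikube copies the shared cluster certificates onto the node itself in the scp block earlier, the join does not need --certificate-key. Producing such a join command by hand on an existing control-plane node looks like this:)

    sudo kubeadm token create --print-join-command --ttl=0
    # prints something like:
    #   kubeadm join control-plane.minikube.internal:8443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>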
	I0729 01:06:32.773664   27502 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:06:32.774045   27502 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:06:32.775339   27502 out.go:177] * Verifying Kubernetes components...
	I0729 01:06:32.776777   27502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:06:33.069249   27502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:06:33.120420   27502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:06:33.120748   27502 kapi.go:59] client config for ha-845088: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 01:06:33.120846   27502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.69:8443
	I0729 01:06:33.121095   27502 node_ready.go:35] waiting up to 6m0s for node "ha-845088-m03" to be "Ready" ...
	I0729 01:06:33.121176   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:33.121184   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:33.121195   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:33.121203   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:33.126534   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:33.621507   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:33.621529   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:33.621538   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:33.621546   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:33.625753   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:34.121777   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:34.121800   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:34.121811   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:34.121816   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:34.125751   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:34.621330   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:34.621379   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:34.621395   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:34.621402   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:34.626049   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:35.122166   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:35.122191   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:35.122204   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:35.122209   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:35.127744   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:35.128219   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:35.622134   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:35.622164   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:35.622177   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:35.622183   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:35.627500   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:36.122190   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:36.122212   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:36.122220   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:36.122223   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:36.126009   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:36.621975   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:36.622000   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:36.622011   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:36.622017   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:36.625659   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:37.121698   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:37.121724   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:37.121736   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:37.121743   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:37.125547   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:37.621948   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:37.621973   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:37.621985   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:37.621992   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:37.626169   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:37.626850   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:38.121948   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:38.121978   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:38.121990   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:38.121996   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:38.125808   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:38.621364   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:38.621385   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:38.621392   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:38.621396   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:38.625132   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:39.122128   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:39.122149   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:39.122159   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:39.122166   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:39.129509   27502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 01:06:39.622138   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:39.622164   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:39.622176   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:39.622182   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:39.626023   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:40.121882   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:40.121906   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:40.121914   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:40.121917   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:40.125872   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:40.126455   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:40.622278   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:40.622300   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:40.622310   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:40.622316   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:40.626476   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:41.121458   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:41.121478   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:41.121487   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:41.121491   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:41.125099   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:41.622300   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:41.622334   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:41.622341   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:41.622363   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:41.625936   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:42.122089   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:42.122108   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:42.122115   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:42.122120   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:42.126042   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:42.126658   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:42.622300   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:42.622326   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:42.622339   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:42.622344   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:42.625647   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:43.121872   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:43.121892   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:43.121909   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:43.121913   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:43.125764   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:43.621927   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:43.621947   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:43.621955   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:43.621960   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:43.627407   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:44.122215   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:44.122237   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:44.122243   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:44.122248   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:44.125791   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:44.621423   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:44.621444   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:44.621452   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:44.621456   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:44.624728   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:44.625383   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:45.121792   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:45.121818   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:45.121828   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:45.121836   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:45.125439   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:45.621762   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:45.621786   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:45.621795   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:45.621800   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:45.625405   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:46.121569   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:46.121590   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:46.121598   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:46.121601   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:46.125233   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:46.621723   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:46.621743   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:46.621754   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:46.621760   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:46.625514   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:46.626605   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:47.122029   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:47.122054   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:47.122065   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:47.122070   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:47.125507   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:47.621740   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:47.621788   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:47.621800   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:47.621807   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:47.625683   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:48.121996   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:48.122019   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:48.122026   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:48.122034   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:48.126309   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:48.621519   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:48.621542   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:48.621553   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:48.621557   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:48.625147   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:49.122033   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:49.122052   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:49.122059   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:49.122063   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:49.125561   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:49.126204   27502 node_ready.go:53] node "ha-845088-m03" has status "Ready":"False"
	I0729 01:06:49.622018   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:49.622038   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:49.622049   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:49.622062   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:49.625592   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.121595   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:50.121618   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.121639   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.121656   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.124936   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.622298   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:50.622319   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.622327   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.622332   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.627435   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:50.627995   27502 node_ready.go:49] node "ha-845088-m03" has status "Ready":"True"
	I0729 01:06:50.628015   27502 node_ready.go:38] duration metric: took 17.506903062s for node "ha-845088-m03" to be "Ready" ...
	I0729 01:06:50.628023   27502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:06:50.628087   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:50.628100   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.628107   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.628113   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.637007   27502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 01:06:50.644759   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.644863   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-26phs
	I0729 01:06:50.644873   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.644883   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.644892   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.648644   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.649244   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:50.649260   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.649271   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.649275   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.651972   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.652545   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.652564   27502 pod_ready.go:81] duration metric: took 7.779242ms for pod "coredns-7db6d8ff4d-26phs" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.652576   27502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.652640   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x4jjj
	I0729 01:06:50.652648   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.652655   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.652660   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.655020   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.655717   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:50.655732   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.655741   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.655747   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.659322   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:50.659818   27502 pod_ready.go:92] pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.659840   27502 pod_ready.go:81] duration metric: took 7.253994ms for pod "coredns-7db6d8ff4d-x4jjj" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.659849   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.659898   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088
	I0729 01:06:50.659907   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.659914   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.659918   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.662415   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.662913   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:50.662926   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.662934   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.662938   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.665332   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.665797   27502 pod_ready.go:92] pod "etcd-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.665815   27502 pod_ready.go:81] duration metric: took 5.960268ms for pod "etcd-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.665823   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.665875   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m02
	I0729 01:06:50.665882   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.665888   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.665893   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.668354   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.668806   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:50.668820   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.668827   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.668831   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.671325   27502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 01:06:50.671760   27502 pod_ready.go:92] pod "etcd-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:50.671777   27502 pod_ready.go:81] duration metric: took 5.946655ms for pod "etcd-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.671785   27502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:50.823180   27502 request.go:629] Waited for 151.318684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m03
	I0729 01:06:50.823241   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-845088-m03
	I0729 01:06:50.823246   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:50.823256   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:50.823264   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:50.826515   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.022605   27502 request.go:629] Waited for 195.358941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:51.022673   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:51.022679   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.022686   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.022690   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.026093   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.026822   27502 pod_ready.go:92] pod "etcd-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:51.026842   27502 pod_ready.go:81] duration metric: took 355.049089ms for pod "etcd-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.026864   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.222995   27502 request.go:629] Waited for 196.062419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:06:51.223044   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088
	I0729 01:06:51.223049   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.223073   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.223079   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.226304   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.422404   27502 request.go:629] Waited for 195.275924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:51.422477   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:51.422482   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.422489   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.422493   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.426021   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.426717   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:51.426731   27502 pod_ready.go:81] duration metric: took 399.860523ms for pod "kube-apiserver-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.426741   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.622804   27502 request.go:629] Waited for 195.971586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:06:51.622866   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m02
	I0729 01:06:51.622874   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.622888   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.622897   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.626804   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.823038   27502 request.go:629] Waited for 195.321561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:51.823118   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:51.823127   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:51.823135   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:51.823140   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:51.826588   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:51.827178   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:51.827199   27502 pod_ready.go:81] duration metric: took 400.449389ms for pod "kube-apiserver-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:51.827208   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.023303   27502 request.go:629] Waited for 196.027124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m03
	I0729 01:06:52.023401   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-845088-m03
	I0729 01:06:52.023413   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.023424   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.023431   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.027029   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.223135   27502 request.go:629] Waited for 195.083537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:52.223187   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:52.223192   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.223201   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.223205   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.226835   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.227608   27502 pod_ready.go:92] pod "kube-apiserver-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:52.227629   27502 pod_ready.go:81] duration metric: took 400.413096ms for pod "kube-apiserver-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.227641   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.423124   27502 request.go:629] Waited for 195.414268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:06:52.423213   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088
	I0729 01:06:52.423224   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.423234   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.423244   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.426879   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.623190   27502 request.go:629] Waited for 195.358566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:52.623245   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:52.623252   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.623262   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.623266   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.626629   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:52.627190   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:52.627208   27502 pod_ready.go:81] duration metric: took 399.561032ms for pod "kube-controller-manager-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.627218   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:52.822296   27502 request.go:629] Waited for 195.014469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:06:52.822379   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m02
	I0729 01:06:52.822385   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:52.822392   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:52.822397   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:52.826516   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:53.023340   27502 request.go:629] Waited for 196.262158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:53.023397   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:53.023402   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.023410   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.023417   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.026669   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:53.027273   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:53.027290   27502 pod_ready.go:81] duration metric: took 400.066313ms for pod "kube-controller-manager-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.027300   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.222318   27502 request.go:629] Waited for 194.955355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m03
	I0729 01:06:53.222374   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088-m03
	I0729 01:06:53.222379   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.222387   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.222391   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.227575   27502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 01:06:53.423053   27502 request.go:629] Waited for 194.376949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.423127   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.423133   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.423140   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.423144   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.426733   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:53.427399   27502 pod_ready.go:92] pod "kube-controller-manager-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:53.427419   27502 pod_ready.go:81] duration metric: took 400.112689ms for pod "kube-controller-manager-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.427429   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f4965" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.622494   27502 request.go:629] Waited for 195.005719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4965
	I0729 01:06:53.622590   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4965
	I0729 01:06:53.622602   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.622613   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.622621   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.626301   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:53.822925   27502 request.go:629] Waited for 195.789141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.822979   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:53.822985   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:53.822994   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:53.822999   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:53.827869   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:53.828629   27502 pod_ready.go:92] pod "kube-proxy-f4965" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:53.828645   27502 pod_ready.go:81] duration metric: took 401.210506ms for pod "kube-proxy-f4965" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:53.828654   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.022753   27502 request.go:629] Waited for 194.019404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:06:54.022808   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6gxl
	I0729 01:06:54.022815   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.022827   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.022838   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.026865   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:54.222914   27502 request.go:629] Waited for 195.356366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:54.222974   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:54.222980   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.223002   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.223023   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.226655   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:54.227272   27502 pod_ready.go:92] pod "kube-proxy-j6gxl" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:54.227292   27502 pod_ready.go:81] duration metric: took 398.631895ms for pod "kube-proxy-j6gxl" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.227306   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.422738   27502 request.go:629] Waited for 195.363958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:06:54.422789   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7
	I0729 01:06:54.422793   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.422801   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.422806   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.425963   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:54.623126   27502 request.go:629] Waited for 196.438329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:54.623181   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:54.623189   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.623200   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.623211   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.626584   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:54.627203   27502 pod_ready.go:92] pod "kube-proxy-tmzt7" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:54.627224   27502 pod_ready.go:81] duration metric: took 399.909597ms for pod "kube-proxy-tmzt7" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.627236   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:54.822276   27502 request.go:629] Waited for 194.971609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:06:54.822343   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088
	I0729 01:06:54.822348   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:54.822356   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:54.822360   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:54.825734   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.022479   27502 request.go:629] Waited for 196.276271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:55.022554   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088
	I0729 01:06:55.022561   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.022571   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.022582   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.026037   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.026626   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:55.026643   27502 pod_ready.go:81] duration metric: took 399.399806ms for pod "kube-scheduler-ha-845088" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.026655   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.222698   27502 request.go:629] Waited for 195.97885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:06:55.222750   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m02
	I0729 01:06:55.222756   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.222764   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.222770   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.227134   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:55.422294   27502 request.go:629] Waited for 194.282327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:55.422351   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m02
	I0729 01:06:55.422357   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.422364   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.422368   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.425636   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.426269   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:55.426288   27502 pod_ready.go:81] duration metric: took 399.624394ms for pod "kube-scheduler-ha-845088-m02" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.426302   27502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.622660   27502 request.go:629] Waited for 196.27777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m03
	I0729 01:06:55.622725   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-845088-m03
	I0729 01:06:55.622732   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.622743   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.622752   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.626385   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.822392   27502 request.go:629] Waited for 195.255482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:55.822441   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes/ha-845088-m03
	I0729 01:06:55.822448   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.822455   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.822459   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.825634   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:55.826187   27502 pod_ready.go:92] pod "kube-scheduler-ha-845088-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 01:06:55.826205   27502 pod_ready.go:81] duration metric: took 399.895578ms for pod "kube-scheduler-ha-845088-m03" in "kube-system" namespace to be "Ready" ...
	I0729 01:06:55.826223   27502 pod_ready.go:38] duration metric: took 5.198189101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:06:55.826243   27502 api_server.go:52] waiting for apiserver process to appear ...
	I0729 01:06:55.826292   27502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:06:55.845508   27502 api_server.go:72] duration metric: took 23.071807835s to wait for apiserver process to appear ...
	I0729 01:06:55.845540   27502 api_server.go:88] waiting for apiserver healthz status ...
	I0729 01:06:55.845563   27502 api_server.go:253] Checking apiserver healthz at https://192.168.39.69:8443/healthz ...
	I0729 01:06:55.850162   27502 api_server.go:279] https://192.168.39.69:8443/healthz returned 200:
	ok
	I0729 01:06:55.850230   27502 round_trippers.go:463] GET https://192.168.39.69:8443/version
	I0729 01:06:55.850240   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:55.850251   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:55.850259   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:55.851222   27502 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 01:06:55.851293   27502 api_server.go:141] control plane version: v1.30.3
	I0729 01:06:55.851307   27502 api_server.go:131] duration metric: took 5.76055ms to wait for apiserver health ...
	I0729 01:06:55.851316   27502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 01:06:56.022703   27502 request.go:629] Waited for 171.322479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.022765   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.022770   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.022777   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.022781   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.029815   27502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 01:06:56.036379   27502 system_pods.go:59] 24 kube-system pods found
	I0729 01:06:56.036410   27502 system_pods.go:61] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:06:56.036417   27502 system_pods.go:61] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:06:56.036421   27502 system_pods.go:61] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:06:56.036427   27502 system_pods.go:61] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:06:56.036430   27502 system_pods.go:61] "etcd-ha-845088-m03" [3a225030-386d-4e16-875f-bc5ecb3b2692] Running
	I0729 01:06:56.036435   27502 system_pods.go:61] "kindnet-fvw2k" [c0096f64-69dd-4a0f-853f-7798d413bde2] Running
	I0729 01:06:56.036438   27502 system_pods.go:61] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:06:56.036442   27502 system_pods.go:61] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:06:56.036445   27502 system_pods.go:61] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:06:56.036448   27502 system_pods.go:61] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:06:56.036451   27502 system_pods.go:61] "kube-apiserver-ha-845088-m03" [3062f069-6eba-4418-9778-43689dab75bb] Running
	I0729 01:06:56.036455   27502 system_pods.go:61] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:06:56.036459   27502 system_pods.go:61] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:06:56.036463   27502 system_pods.go:61] "kube-controller-manager-ha-845088-m03" [71e94457-a846-4756-ab5e-9373344a5f4a] Running
	I0729 01:06:56.036469   27502 system_pods.go:61] "kube-proxy-f4965" [23788f31-afa6-43f9-b5ec-2facd23efe4e] Running
	I0729 01:06:56.036472   27502 system_pods.go:61] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:06:56.036475   27502 system_pods.go:61] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:06:56.036479   27502 system_pods.go:61] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:06:56.036483   27502 system_pods.go:61] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:06:56.036486   27502 system_pods.go:61] "kube-scheduler-ha-845088-m03" [a7e34040-d0d4-453a-bc66-d826c253a9e5] Running
	I0729 01:06:56.036489   27502 system_pods.go:61] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:06:56.036494   27502 system_pods.go:61] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:06:56.036497   27502 system_pods.go:61] "kube-vip-ha-845088-m03" [5b8e796c-8556-4cc1-a46d-7c4c23fc43df] Running
	I0729 01:06:56.036500   27502 system_pods.go:61] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:06:56.036506   27502 system_pods.go:74] duration metric: took 185.184729ms to wait for pod list to return data ...
	I0729 01:06:56.036516   27502 default_sa.go:34] waiting for default service account to be created ...
	I0729 01:06:56.222913   27502 request.go:629] Waited for 186.333292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:06:56.222964   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/default/serviceaccounts
	I0729 01:06:56.222968   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.222976   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.222979   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.226385   27502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 01:06:56.226513   27502 default_sa.go:45] found service account: "default"
	I0729 01:06:56.226530   27502 default_sa.go:55] duration metric: took 190.008463ms for default service account to be created ...
	I0729 01:06:56.226537   27502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 01:06:56.422888   27502 request.go:629] Waited for 196.263264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.422952   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/namespaces/kube-system/pods
	I0729 01:06:56.422962   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.422973   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.422980   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.430352   27502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 01:06:56.437114   27502 system_pods.go:86] 24 kube-system pods found
	I0729 01:06:56.437138   27502 system_pods.go:89] "coredns-7db6d8ff4d-26phs" [0fa00166-935c-4e30-899d-0ae105083984] Running
	I0729 01:06:56.437144   27502 system_pods.go:89] "coredns-7db6d8ff4d-x4jjj" [659a9fc3-a597-401d-9ceb-71a04f049d8c] Running
	I0729 01:06:56.437148   27502 system_pods.go:89] "etcd-ha-845088" [eb889e81-3ece-4af1-8bce-9c3740e8209c] Running
	I0729 01:06:56.437153   27502 system_pods.go:89] "etcd-ha-845088-m02" [e1bd96c5-3618-4f17-aa55-4a0c227cb401] Running
	I0729 01:06:56.437158   27502 system_pods.go:89] "etcd-ha-845088-m03" [3a225030-386d-4e16-875f-bc5ecb3b2692] Running
	I0729 01:06:56.437165   27502 system_pods.go:89] "kindnet-fvw2k" [c0096f64-69dd-4a0f-853f-7798d413bde2] Running
	I0729 01:06:56.437170   27502 system_pods.go:89] "kindnet-jz7gr" [3d184fd2-5bfc-40bd-b7b3-98934d58a689] Running
	I0729 01:06:56.437180   27502 system_pods.go:89] "kindnet-p87gx" [07b16da9-2b6f-45b8-b9a4-0009e6d60925] Running
	I0729 01:06:56.437186   27502 system_pods.go:89] "kube-apiserver-ha-845088" [1fe50c6b-6497-498e-8f2a-c84c3dabdbb3] Running
	I0729 01:06:56.437194   27502 system_pods.go:89] "kube-apiserver-ha-845088-m02" [d7fef5ee-2f47-4b3b-b625-f146578f3164] Running
	I0729 01:06:56.437201   27502 system_pods.go:89] "kube-apiserver-ha-845088-m03" [3062f069-6eba-4418-9778-43689dab75bb] Running
	I0729 01:06:56.437207   27502 system_pods.go:89] "kube-controller-manager-ha-845088" [e58772fb-6dcd-431c-ba7b-cf726504c97e] Running
	I0729 01:06:56.437214   27502 system_pods.go:89] "kube-controller-manager-ha-845088-m02" [e8811503-c081-430f-9191-e1cf1fa1a866] Running
	I0729 01:06:56.437219   27502 system_pods.go:89] "kube-controller-manager-ha-845088-m03" [71e94457-a846-4756-ab5e-9373344a5f4a] Running
	I0729 01:06:56.437225   27502 system_pods.go:89] "kube-proxy-f4965" [23788f31-afa6-43f9-b5ec-2facd23efe4e] Running
	I0729 01:06:56.437229   27502 system_pods.go:89] "kube-proxy-j6gxl" [45f77cb8-2b41-4069-8468-6defe7e0f51e] Running
	I0729 01:06:56.437235   27502 system_pods.go:89] "kube-proxy-tmzt7" [f2e92bb0-87c0-4d4e-ae34-d67970a61dc9] Running
	I0729 01:06:56.437239   27502 system_pods.go:89] "kube-scheduler-ha-845088" [8dd2df88-eb98-4220-a7f5-fe78bd302573] Running
	I0729 01:06:56.437245   27502 system_pods.go:89] "kube-scheduler-ha-845088-m02" [ca68c56a-ffbe-43be-b452-bd6bd7c508ba] Running
	I0729 01:06:56.437250   27502 system_pods.go:89] "kube-scheduler-ha-845088-m03" [a7e34040-d0d4-453a-bc66-d826c253a9e5] Running
	I0729 01:06:56.437256   27502 system_pods.go:89] "kube-vip-ha-845088" [23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8] Running
	I0729 01:06:56.437260   27502 system_pods.go:89] "kube-vip-ha-845088-m02" [4716aa15-53c6-4f56-98a4-1b0697bb355d] Running
	I0729 01:06:56.437268   27502 system_pods.go:89] "kube-vip-ha-845088-m03" [5b8e796c-8556-4cc1-a46d-7c4c23fc43df] Running
	I0729 01:06:56.437276   27502 system_pods.go:89] "storage-provisioner" [9b770bc2-7368-4b86-89ff-399d60f17817] Running
	I0729 01:06:56.437287   27502 system_pods.go:126] duration metric: took 210.741737ms to wait for k8s-apps to be running ...
	I0729 01:06:56.437299   27502 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 01:06:56.437347   27502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:06:56.454489   27502 system_svc.go:56] duration metric: took 17.1809ms WaitForService to wait for kubelet
	I0729 01:06:56.454519   27502 kubeadm.go:582] duration metric: took 23.680824506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:06:56.454543   27502 node_conditions.go:102] verifying NodePressure condition ...
	I0729 01:06:56.622609   27502 request.go:629] Waited for 167.986442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.69:8443/api/v1/nodes
	I0729 01:06:56.622694   27502 round_trippers.go:463] GET https://192.168.39.69:8443/api/v1/nodes
	I0729 01:06:56.622701   27502 round_trippers.go:469] Request Headers:
	I0729 01:06:56.622711   27502 round_trippers.go:473]     Accept: application/json, */*
	I0729 01:06:56.622716   27502 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 01:06:56.627702   27502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 01:06:56.628848   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:06:56.628880   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:06:56.628894   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:06:56.628899   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:06:56.628904   27502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:06:56.628909   27502 node_conditions.go:123] node cpu capacity is 2
	I0729 01:06:56.628915   27502 node_conditions.go:105] duration metric: took 174.365815ms to run NodePressure ...
	I0729 01:06:56.628932   27502 start.go:241] waiting for startup goroutines ...
	I0729 01:06:56.628959   27502 start.go:255] writing updated cluster config ...
	I0729 01:06:56.629322   27502 ssh_runner.go:195] Run: rm -f paused
	I0729 01:06:56.683819   27502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 01:06:56.685924   27502 out.go:177] * Done! kubectl is now configured to use "ha-845088" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.215803877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215497215783839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6522b03-6304-4688-bca8-9197486256cc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.216576664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fafd8dfb-9050-40cf-884f-33bb8b8108b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.216761472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fafd8dfb-9050-40cf-884f-33bb8b8108b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.217476999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fafd8dfb-9050-40cf-884f-33bb8b8108b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.259613828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbdfb4bf-9095-457f-9953-611ae1f149b5 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.259853897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbdfb4bf-9095-457f-9953-611ae1f149b5 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.261102236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd7cfbac-4aef-42bd-9949-dd5053a7a374 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.261561311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215497261538476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd7cfbac-4aef-42bd-9949-dd5053a7a374 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.262142547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93eb4212-8bb2-40cd-81db-cbf2f5469b81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.262216861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93eb4212-8bb2-40cd-81db-cbf2f5469b81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.262451600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93eb4212-8bb2-40cd-81db-cbf2f5469b81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.311451160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=843bea72-8997-4167-ba58-427a5136e43f name=/runtime.v1.RuntimeService/Version
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.311526268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=843bea72-8997-4167-ba58-427a5136e43f name=/runtime.v1.RuntimeService/Version
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.321400270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c607ea6-8a9a-4833-b3ea-448c2780e9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.322180221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215497322154361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c607ea6-8a9a-4833-b3ea-448c2780e9e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.322818133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59255f64-cca9-4081-a9bd-2d78a1803d1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.322884207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59255f64-cca9-4081-a9bd-2d78a1803d1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.323218624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59255f64-cca9-4081-a9bd-2d78a1803d1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.367880923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5166a103-baf4-4c2d-9574-845d958d9b0b name=/runtime.v1.RuntimeService/Version
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.367955990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5166a103-baf4-4c2d-9574-845d958d9b0b name=/runtime.v1.RuntimeService/Version
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.370283987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a90da75-248f-4e73-98e7-3c71077e1c2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.371524913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215497371496673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a90da75-248f-4e73-98e7-3c71077e1c2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.375314923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6726ff5-1e6b-4edb-a412-f53f27ce9266 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.375369760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6726ff5-1e6b-4edb-a412-f53f27ce9266 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:11:37 ha-845088 crio[684]: time="2024-07-29 01:11:37.375579907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215220870631423,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec,PodSandboxId:0f3c4c82eabf728e46f1292a4d06691059f18ba04ba3d2db8f5e114774d74e19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215067514800424,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067519965802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215067480426326,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a5
97-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722215055323413886,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172221505
0132743165,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37,PodSandboxId:e6d68b2b55c9842c1d399a7b1fab0b904a885eb0d2000328da1eea0883ec2655,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222150328
94753496,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f4843ded93a5745feef920f67d7033d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215029963540928,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215029937490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c,PodSandboxId:35638eec4b1817e80841b56fd242d92c9a4b263f0d6d53c24eb00c6974712e68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215029884152650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5,PodSandboxId:88c63df98913c4ba58c90d9d1361d7d198cbb7a524227602b69b52b9e7db9b27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215029837706165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6726ff5-1e6b-4edb-a412-f53f27ce9266 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	393f89e96685f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   077fc92624630       busybox-fc5497c4f-kdxhf
	102a2205a11ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   860aff4792108       coredns-7db6d8ff4d-26phs
	dd54eae7304e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   0f3c4c82eabf7       storage-provisioner
	4c9a1e2ce8399       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   5998a0c18499b       coredns-7db6d8ff4d-x4jjj
	b117823d9ea03       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   d036858417b61       kindnet-jz7gr
	ba58523a71dfb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   a37edf1e80380       kube-proxy-tmzt7
	994e26254fd08       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   e6d68b2b55c98       kube-vip-ha-845088
	2d545f40bcf5d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   00d828e6fd11c       etcd-ha-845088
	71cb29192a2ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   64651fd976b6f       kube-scheduler-ha-845088
	2f0d5f5418f21       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   35638eec4b181       kube-controller-manager-ha-845088
	32f40f9b4c144       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   88c63df98913c       kube-apiserver-ha-845088
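	(Editor's example, not part of the captured output: the container-status table above reflects what CRI-O reports on the node. A minimal, hedged way to reproduce it by hand — assuming SSH access to the ha-845088 node through minikube and CRI-O listening on its default socket — is:
	
	  # list all containers known to CRI-O on the control-plane node
	  minikube -p ha-845088 ssh -- sudo crictl ps -a
	
	The profile name "ha-845088" is taken from the log context above; substitute your own profile if it differs.)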
	
	
	==> coredns [102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6] <==
	[INFO] 10.244.1.2:39393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025886s
	[INFO] 10.244.0.4:38271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097383s
	[INFO] 10.244.0.4:51459 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018403s
	[INFO] 10.244.0.4:45452 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208939s
	[INFO] 10.244.0.4:33630 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083936s
	[INFO] 10.244.0.4:56145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107111s
	[INFO] 10.244.0.4:49547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013737s
	[INFO] 10.244.2.2:50551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157425s
	[INFO] 10.244.2.2:54720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000849s
	[INFO] 10.244.2.2:46977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133922s
	[INFO] 10.244.2.2:52278 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098427s
	[INFO] 10.244.2.2:33523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166768s
	[INFO] 10.244.2.2:56762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127309s
	[INFO] 10.244.1.2:60690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162836s
	[INFO] 10.244.0.4:53481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125124s
	[INFO] 10.244.0.4:36302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006046s
	[INFO] 10.244.2.2:51131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200754s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135186s
	[INFO] 10.244.2.2:47188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095941s
	[INFO] 10.244.2.2:45175 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088023s
	[INFO] 10.244.1.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271227s
	[INFO] 10.244.0.4:35507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089711s
	[INFO] 10.244.0.4:48138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191709s
	[INFO] 10.244.2.2:46681 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084718s
	[INFO] 10.244.2.2:58403 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190529s
	
	
	==> coredns [4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87] <==
	[INFO] 10.244.0.4:49094 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000115404s
	[INFO] 10.244.2.2:58484 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183253s
	[INFO] 10.244.2.2:50917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000093443s
	[INFO] 10.244.1.2:40330 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137004s
	[INFO] 10.244.1.2:40312 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003445772s
	[INFO] 10.244.1.2:54896 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275281s
	[INFO] 10.244.1.2:36709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149351s
	[INFO] 10.244.1.2:35599 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014616s
	[INFO] 10.244.1.2:40232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145035s
	[INFO] 10.244.0.4:42879 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002077041s
	[INFO] 10.244.0.4:46236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001377262s
	[INFO] 10.244.2.2:60143 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018397s
	[INFO] 10.244.2.2:33059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001229041s
	[INFO] 10.244.1.2:50949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114887s
	[INFO] 10.244.1.2:41895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099234s
	[INFO] 10.244.1.2:57885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008087s
	[INFO] 10.244.0.4:46809 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202377s
	[INFO] 10.244.0.4:54702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067695s
	[INFO] 10.244.1.2:33676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193639s
	[INFO] 10.244.1.2:35018 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014376s
	[INFO] 10.244.1.2:58362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164011s
	[INFO] 10.244.0.4:42745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108289s
	[INFO] 10.244.0.4:38059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080482s
	[INFO] 10.244.2.2:57416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132756s
	[INFO] 10.244.2.2:34696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000282968s
	
	
	==> describe nodes <==
	Name:               ha-845088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_03_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:11:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:04:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-845088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbb04d72e92946e88c1da68d30c7bef3
	  System UUID:                fbb04d72-e929-46e8-8c1d-a68d30c7bef3
	  Boot ID:                    8609abf0-fb2f-4316-bc25-edde00b876e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kdxhf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7db6d8ff4d-26phs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m28s
	  kube-system                 coredns-7db6d8ff4d-x4jjj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m28s
	  kube-system                 etcd-ha-845088                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m41s
	  kube-system                 kindnet-jz7gr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-apiserver-ha-845088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-controller-manager-ha-845088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-proxy-tmzt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-scheduler-ha-845088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-vip-ha-845088                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m27s  kube-proxy       
	  Normal  Starting                 7m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m41s  kubelet          Node ha-845088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s  kubelet          Node ha-845088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s  kubelet          Node ha-845088 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m29s  node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal  NodeReady                7m11s  kubelet          Node ha-845088 status is now: NodeReady
	  Normal  RegisteredNode           6m6s   node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal  RegisteredNode           4m51s  node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	
	
	Name:               ha-845088-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:05:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:08:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 01:07:15 +0000   Mon, 29 Jul 2024 01:08:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-845088-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71d77df4f03a4876b498a96bcef9ff64
	  System UUID:                71d77df4-f03a-4876-b498-a96bcef9ff64
	  Boot ID:                    9f6c4b85-e410-4558-8767-01550bcc9b1c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dbfgn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-845088-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-p87gx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-845088-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-845088-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-j6gxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-845088-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-845088-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-845088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s (x7 over 6m24s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeNotReady             2m39s                  node-controller  Node ha-845088-m02 status is now: NodeNotReady
	
	
	Name:               ha-845088-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_06_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:06:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:11:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:07:30 +0000   Mon, 29 Jul 2024 01:06:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    ha-845088-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a156142ecc543bebea07e4da7f3d99e
	  System UUID:                1a156142-ecc5-43be-bea0-7e4da7f3d99e
	  Boot ID:                    cfe16ffe-c16a-4205-be07-6a555787e997
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wvsr6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-845088-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-fvw2k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m9s
	  kube-system                 kube-apiserver-ha-845088-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-845088-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-f4965                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-ha-845088-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-845088-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node ha-845088-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m9s)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal  RegisteredNode           4m51s                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	
	
	Name:               ha-845088-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_07_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:11:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:08:07 +0000   Mon, 29 Jul 2024 01:07:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-845088-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f15978c17b794a0dab280aaa8e6fe8a4
	  System UUID:                f15978c1-7b79-4a0d-ab28-0aaa8e6fe8a4
	  Boot ID:                    0bfe37db-c4f2-4e8b-9f45-1737af272bfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rffd2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-proxy-bbp9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x2 over 4m1s)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x2 over 4m1s)  kubelet          Node ha-845088-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x2 over 4m1s)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal  NodeReady                3m40s                kubelet          Node ha-845088-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050829] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.779563] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.551910] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.576682] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.177713] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054473] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057858] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159603] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.120915] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.261683] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.164596] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +4.624660] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060939] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.270727] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.083870] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 01:04] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.392423] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 01:05] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46] <==
	{"level":"warn","ts":"2024-07-29T01:11:37.343778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.637075Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.641985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.648569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.652492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.671123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.678342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.685144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.688278Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.690935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.70156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.708146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.71512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.719977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.723261Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.731929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.736248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.741381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.747784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.752854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.756482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.765934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.772659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.779343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T01:11:37.836209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9199217ddd03919b","from":"9199217ddd03919b","remote-peer-id":"971410e140380cd2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:11:37 up 8 min,  0 users,  load average: 0.15, 0.33, 0.19
	Linux ha-845088 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198] <==
	I0729 01:11:06.416887       1 main.go:299] handling current node
	I0729 01:11:16.406896       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:11:16.407129       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:11:16.407337       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:11:16.407364       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:11:16.407427       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:11:16.407462       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:11:16.407547       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:11:16.407565       1 main.go:299] handling current node
	I0729 01:11:26.415460       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:11:26.415602       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:11:26.415797       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:11:26.415842       1 main.go:299] handling current node
	I0729 01:11:26.415870       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:11:26.415888       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:11:26.415982       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:11:26.416101       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:11:36.411498       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:11:36.411652       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:11:36.411874       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:11:36.411913       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:11:36.412200       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:11:36.412241       1 main.go:299] handling current node
	I0729 01:11:36.412283       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:11:36.412300       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5] <==
	I0729 01:03:56.166924       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:03:56.190107       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 01:03:56.204710       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:04:08.624663       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 01:04:09.319132       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 01:07:02.499914       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45894: use of closed network connection
	E0729 01:07:02.709908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45926: use of closed network connection
	E0729 01:07:02.904847       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45946: use of closed network connection
	E0729 01:07:03.125741       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45970: use of closed network connection
	E0729 01:07:03.327461       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45992: use of closed network connection
	E0729 01:07:03.511852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46010: use of closed network connection
	E0729 01:07:03.688089       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46016: use of closed network connection
	E0729 01:07:03.874804       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46038: use of closed network connection
	E0729 01:07:04.067333       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46056: use of closed network connection
	E0729 01:07:04.359571       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46086: use of closed network connection
	E0729 01:07:04.538798       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46114: use of closed network connection
	E0729 01:07:04.722192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46140: use of closed network connection
	E0729 01:07:04.908798       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46144: use of closed network connection
	E0729 01:07:05.114947       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46156: use of closed network connection
	E0729 01:07:05.332224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46178: use of closed network connection
	I0729 01:07:41.128643       1 trace.go:236] Trace[1411284324]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:01f87a8c-35ed-4845-8e04-6282cab007be,client:192.168.39.254,api-group:coordination.k8s.io,api-version:v1,name:ha-845088,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-845088,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PUT (29-Jul-2024 01:07:40.627) (total time: 501ms):
	Trace[1411284324]: ["GuaranteedUpdate etcd3" audit-id:01f87a8c-35ed-4845-8e04-6282cab007be,key:/leases/kube-node-lease/ha-845088,type:*coordination.Lease,resource:leases.coordination.k8s.io 500ms (01:07:40.627)
	Trace[1411284324]:  ---"Txn call completed" 499ms (01:07:41.128)]
	Trace[1411284324]: [501.02857ms] [501.02857ms] END
	W0729 01:08:24.986762       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243 192.168.39.69]
	
	
	==> kube-controller-manager [2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c] <==
	I0729 01:06:57.852936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.137318ms"
	I0729 01:06:57.933836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.607458ms"
	I0729 01:06:57.987372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.478288ms"
	I0729 01:06:57.987505       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.447µs"
	I0729 01:06:58.110396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.228638ms"
	I0729 01:06:58.111457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.469µs"
	I0729 01:06:58.835097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.763µs"
	I0729 01:06:59.038789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.09µs"
	I0729 01:06:59.046419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.438µs"
	I0729 01:06:59.057199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.482µs"
	I0729 01:07:00.756424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.151709ms"
	I0729 01:07:00.756571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.993µs"
	I0729 01:07:01.930100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.212087ms"
	I0729 01:07:01.930424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.955µs"
	I0729 01:07:02.062845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.844485ms"
	I0729 01:07:02.063913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.032µs"
	E0729 01:07:36.706535       1 certificate_controller.go:146] Sync csr-4grvg failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-4grvg": the object has been modified; please apply your changes to the latest version and try again
	E0729 01:07:36.733899       1 certificate_controller.go:146] Sync csr-4grvg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-4grvg": the object has been modified; please apply your changes to the latest version and try again
	I0729 01:07:36.982348       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-845088-m04\" does not exist"
	I0729 01:07:37.025140       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-845088-m04" podCIDRs=["10.244.3.0/24"]
	I0729 01:07:38.805985       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-845088-m04"
	I0729 01:07:57.447108       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-845088-m04"
	I0729 01:08:58.851112       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-845088-m04"
	I0729 01:08:59.008338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.730323ms"
	I0729 01:08:59.008459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.148µs"
	
	
	==> kube-proxy [ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8] <==
	I0729 01:04:10.440856       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:04:10.458819       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 01:04:10.509960       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:04:10.510100       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:04:10.510134       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:04:10.513768       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:04:10.514374       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:04:10.514479       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:04:10.516370       1 config.go:192] "Starting service config controller"
	I0729 01:04:10.516560       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:04:10.516607       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:04:10.516625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:04:10.519592       1 config.go:319] "Starting node config controller"
	I0729 01:04:10.519693       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:04:10.617213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:04:10.617252       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:04:10.619851       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60] <==
	W0729 01:03:54.330161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:03:54.330212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:03:54.351760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 01:03:54.351883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 01:03:54.367219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:03:54.367313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:03:54.423091       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:03:54.423258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:03:54.559664       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:03:54.559712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 01:03:56.609195       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 01:07:37.083711       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rffd2\": pod kindnet-rffd2 is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rffd2" node="ha-845088-m04"
	E0729 01:07:37.083959       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 06c4010c-e52d-4782-8c8d-05b8aed68ae1(kube-system/kindnet-rffd2) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rffd2"
	E0729 01:07:37.083992       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rffd2\": pod kindnet-rffd2 is already assigned to node \"ha-845088-m04\"" pod="kube-system/kindnet-rffd2"
	I0729 01:07:37.084075       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rffd2" node="ha-845088-m04"
	E0729 01:07:37.098504       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zsmqf\": pod kube-proxy-zsmqf is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zsmqf" node="ha-845088-m04"
	E0729 01:07:37.098571       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2ba8ef1a-2849-40e5-b08d-a44513494774(kube-system/kube-proxy-zsmqf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zsmqf"
	E0729 01:07:37.098594       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zsmqf\": pod kube-proxy-zsmqf is already assigned to node \"ha-845088-m04\"" pod="kube-system/kube-proxy-zsmqf"
	I0729 01:07:37.098636       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zsmqf" node="ha-845088-m04"
	E0729 01:07:37.177814       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x248x\": pod kindnet-x248x is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x248x" node="ha-845088-m04"
	E0729 01:07:37.177910       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x248x\": pod kindnet-x248x is already assigned to node \"ha-845088-m04\"" pod="kube-system/kindnet-x248x"
	E0729 01:07:38.118636       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bbp9f\": pod kube-proxy-bbp9f is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bbp9f" node="ha-845088-m04"
	E0729 01:07:38.118713       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5917b1fe-1ae9-4713-9760-1dc324ac52d3(kube-system/kube-proxy-bbp9f) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-bbp9f"
	E0729 01:07:38.118752       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bbp9f\": pod kube-proxy-bbp9f is already assigned to node \"ha-845088-m04\"" pod="kube-system/kube-proxy-bbp9f"
	I0729 01:07:38.118774       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bbp9f" node="ha-845088-m04"
	
	
	==> kubelet <==
	Jul 29 01:06:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:06:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:06:57 ha-845088 kubelet[1372]: I0729 01:06:57.610708    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x4jjj" podStartSLOduration=168.610584009 podStartE2EDuration="2m48.610584009s" podCreationTimestamp="2024-07-29 01:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:04:28.358229614 +0000 UTC m=+32.399002888" watchObservedRunningTime="2024-07-29 01:06:57.610584009 +0000 UTC m=+181.651357290"
	Jul 29 01:06:57 ha-845088 kubelet[1372]: I0729 01:06:57.612613    1372 topology_manager.go:215] "Topology Admit Handler" podUID="3d626cc7-0294-43eb-903b-83ee7ea03f3d" podNamespace="default" podName="busybox-fc5497c4f-kdxhf"
	Jul 29 01:06:57 ha-845088 kubelet[1372]: I0729 01:06:57.718186    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4lp\" (UniqueName: \"kubernetes.io/projected/3d626cc7-0294-43eb-903b-83ee7ea03f3d-kube-api-access-6n4lp\") pod \"busybox-fc5497c4f-kdxhf\" (UID: \"3d626cc7-0294-43eb-903b-83ee7ea03f3d\") " pod="default/busybox-fc5497c4f-kdxhf"
	Jul 29 01:07:56 ha-845088 kubelet[1372]: E0729 01:07:56.143984    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:07:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:07:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:07:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:07:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:08:56 ha-845088 kubelet[1372]: E0729 01:08:56.145752    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:08:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:08:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:08:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:08:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:09:56 ha-845088 kubelet[1372]: E0729 01:09:56.145167    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:09:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:09:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:09:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:09:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:10:56 ha-845088 kubelet[1372]: E0729 01:10:56.145571    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:10:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:10:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:10:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:10:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-845088 -n ha-845088
helpers_test.go:261: (dbg) Run:  kubectl --context ha-845088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-845088 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-845088 -v=7 --alsologtostderr
E0729 01:12:23.071914   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:12:50.754630   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-845088 -v=7 --alsologtostderr: exit status 82 (2m1.853823649s)

                                                
                                                
-- stdout --
	* Stopping node "ha-845088-m04"  ...
	* Stopping node "ha-845088-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:11:39.209550   33413 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:11:39.209854   33413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:39.209866   33413 out.go:304] Setting ErrFile to fd 2...
	I0729 01:11:39.209871   33413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:11:39.210029   33413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:11:39.210292   33413 out.go:298] Setting JSON to false
	I0729 01:11:39.210402   33413 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:39.210865   33413 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:39.210984   33413 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:11:39.211396   33413 mustload.go:65] Loading cluster: ha-845088
	I0729 01:11:39.211589   33413 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:11:39.211641   33413 stop.go:39] StopHost: ha-845088-m04
	I0729 01:11:39.212079   33413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:39.212136   33413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:39.227242   33413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41503
	I0729 01:11:39.227725   33413 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:39.228371   33413 main.go:141] libmachine: Using API Version  1
	I0729 01:11:39.228405   33413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:39.228730   33413 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:39.230680   33413 out.go:177] * Stopping node "ha-845088-m04"  ...
	I0729 01:11:39.232075   33413 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 01:11:39.232110   33413 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:11:39.232319   33413 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 01:11:39.232343   33413 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:11:39.235027   33413 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:39.235418   33413 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:07:20 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:11:39.235443   33413 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:11:39.235588   33413 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:11:39.235773   33413 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:11:39.235907   33413 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:11:39.236030   33413 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:11:39.322004   33413 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 01:11:39.375028   33413 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 01:11:39.429485   33413 main.go:141] libmachine: Stopping "ha-845088-m04"...
	I0729 01:11:39.429513   33413 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:39.430945   33413 main.go:141] libmachine: (ha-845088-m04) Calling .Stop
	I0729 01:11:39.434488   33413 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 0/120
	I0729 01:11:40.595749   33413 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:11:40.597425   33413 main.go:141] libmachine: Machine "ha-845088-m04" was stopped.
	I0729 01:11:40.597442   33413 stop.go:75] duration metric: took 1.365373785s to stop
	I0729 01:11:40.597477   33413 stop.go:39] StopHost: ha-845088-m03
	I0729 01:11:40.597770   33413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:11:40.597817   33413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:11:40.612517   33413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34861
	I0729 01:11:40.613056   33413 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:11:40.613520   33413 main.go:141] libmachine: Using API Version  1
	I0729 01:11:40.613546   33413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:11:40.613793   33413 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:11:40.615621   33413 out.go:177] * Stopping node "ha-845088-m03"  ...
	I0729 01:11:40.616743   33413 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 01:11:40.616767   33413 main.go:141] libmachine: (ha-845088-m03) Calling .DriverName
	I0729 01:11:40.616957   33413 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 01:11:40.616978   33413 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHHostname
	I0729 01:11:40.620183   33413 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:40.620621   33413 main.go:141] libmachine: (ha-845088-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:6a:ee", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:05:54 +0000 UTC Type:0 Mac:52:54:00:67:6a:ee Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:ha-845088-m03 Clientid:01:52:54:00:67:6a:ee}
	I0729 01:11:40.620658   33413 main.go:141] libmachine: (ha-845088-m03) DBG | domain ha-845088-m03 has defined IP address 192.168.39.243 and MAC address 52:54:00:67:6a:ee in network mk-ha-845088
	I0729 01:11:40.620776   33413 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHPort
	I0729 01:11:40.620943   33413 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHKeyPath
	I0729 01:11:40.621073   33413 main.go:141] libmachine: (ha-845088-m03) Calling .GetSSHUsername
	I0729 01:11:40.621199   33413 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m03/id_rsa Username:docker}
	I0729 01:11:40.710196   33413 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 01:11:40.763045   33413 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 01:11:40.819260   33413 main.go:141] libmachine: Stopping "ha-845088-m03"...
	I0729 01:11:40.819289   33413 main.go:141] libmachine: (ha-845088-m03) Calling .GetState
	I0729 01:11:40.820825   33413 main.go:141] libmachine: (ha-845088-m03) Calling .Stop
	I0729 01:11:40.824464   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 0/120
	I0729 01:11:41.825797   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 1/120
	I0729 01:11:42.827102   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 2/120
	I0729 01:11:43.828400   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 3/120
	I0729 01:11:44.830024   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 4/120
	I0729 01:11:45.831947   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 5/120
	I0729 01:11:46.833582   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 6/120
	I0729 01:11:47.835175   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 7/120
	I0729 01:11:48.836862   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 8/120
	I0729 01:11:49.838540   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 9/120
	I0729 01:11:50.840487   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 10/120
	I0729 01:11:51.841951   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 11/120
	I0729 01:11:52.843509   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 12/120
	I0729 01:11:53.845086   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 13/120
	I0729 01:11:54.846653   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 14/120
	I0729 01:11:55.848065   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 15/120
	I0729 01:11:56.849723   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 16/120
	I0729 01:11:57.851492   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 17/120
	I0729 01:11:58.853070   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 18/120
	I0729 01:11:59.854641   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 19/120
	I0729 01:12:00.856356   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 20/120
	I0729 01:12:01.857726   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 21/120
	I0729 01:12:02.859250   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 22/120
	I0729 01:12:03.860875   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 23/120
	I0729 01:12:04.862396   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 24/120
	I0729 01:12:05.864342   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 25/120
	I0729 01:12:06.865865   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 26/120
	I0729 01:12:07.867467   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 27/120
	I0729 01:12:08.868904   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 28/120
	I0729 01:12:09.870469   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 29/120
	I0729 01:12:10.872486   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 30/120
	I0729 01:12:11.874030   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 31/120
	I0729 01:12:12.875539   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 32/120
	I0729 01:12:13.876903   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 33/120
	I0729 01:12:14.878306   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 34/120
	I0729 01:12:15.880121   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 35/120
	I0729 01:12:16.881595   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 36/120
	I0729 01:12:17.884106   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 37/120
	I0729 01:12:18.885675   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 38/120
	I0729 01:12:19.887187   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 39/120
	I0729 01:12:20.888629   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 40/120
	I0729 01:12:21.890044   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 41/120
	I0729 01:12:22.891383   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 42/120
	I0729 01:12:23.892852   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 43/120
	I0729 01:12:24.894122   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 44/120
	I0729 01:12:25.896025   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 45/120
	I0729 01:12:26.897274   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 46/120
	I0729 01:12:27.898585   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 47/120
	I0729 01:12:28.900808   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 48/120
	I0729 01:12:29.902148   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 49/120
	I0729 01:12:30.904009   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 50/120
	I0729 01:12:31.905432   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 51/120
	I0729 01:12:32.907014   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 52/120
	I0729 01:12:33.908462   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 53/120
	I0729 01:12:34.909904   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 54/120
	I0729 01:12:35.911654   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 55/120
	I0729 01:12:36.913314   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 56/120
	I0729 01:12:37.914641   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 57/120
	I0729 01:12:38.916130   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 58/120
	I0729 01:12:39.917564   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 59/120
	I0729 01:12:40.919138   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 60/120
	I0729 01:12:41.920531   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 61/120
	I0729 01:12:42.922291   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 62/120
	I0729 01:12:43.923611   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 63/120
	I0729 01:12:44.925200   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 64/120
	I0729 01:12:45.927006   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 65/120
	I0729 01:12:46.928316   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 66/120
	I0729 01:12:47.929667   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 67/120
	I0729 01:12:48.930986   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 68/120
	I0729 01:12:49.932512   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 69/120
	I0729 01:12:50.934275   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 70/120
	I0729 01:12:51.935766   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 71/120
	I0729 01:12:52.937363   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 72/120
	I0729 01:12:53.938758   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 73/120
	I0729 01:12:54.940193   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 74/120
	I0729 01:12:55.941936   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 75/120
	I0729 01:12:56.943198   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 76/120
	I0729 01:12:57.944592   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 77/120
	I0729 01:12:58.946153   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 78/120
	I0729 01:12:59.947832   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 79/120
	I0729 01:13:00.949502   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 80/120
	I0729 01:13:01.950880   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 81/120
	I0729 01:13:02.952204   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 82/120
	I0729 01:13:03.953616   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 83/120
	I0729 01:13:04.955112   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 84/120
	I0729 01:13:05.957396   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 85/120
	I0729 01:13:06.958758   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 86/120
	I0729 01:13:07.960151   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 87/120
	I0729 01:13:08.961512   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 88/120
	I0729 01:13:09.963026   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 89/120
	I0729 01:13:10.964943   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 90/120
	I0729 01:13:11.966790   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 91/120
	I0729 01:13:12.968447   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 92/120
	I0729 01:13:13.969863   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 93/120
	I0729 01:13:14.971154   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 94/120
	I0729 01:13:15.972913   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 95/120
	I0729 01:13:16.974368   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 96/120
	I0729 01:13:17.975735   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 97/120
	I0729 01:13:18.977614   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 98/120
	I0729 01:13:19.978821   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 99/120
	I0729 01:13:20.980631   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 100/120
	I0729 01:13:21.982624   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 101/120
	I0729 01:13:22.984243   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 102/120
	I0729 01:13:23.985688   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 103/120
	I0729 01:13:24.987150   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 104/120
	I0729 01:13:25.988490   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 105/120
	I0729 01:13:26.990066   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 106/120
	I0729 01:13:27.991406   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 107/120
	I0729 01:13:28.992899   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 108/120
	I0729 01:13:29.994219   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 109/120
	I0729 01:13:30.996430   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 110/120
	I0729 01:13:31.997873   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 111/120
	I0729 01:13:32.999268   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 112/120
	I0729 01:13:34.000757   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 113/120
	I0729 01:13:35.002618   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 114/120
	I0729 01:13:36.004525   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 115/120
	I0729 01:13:37.005869   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 116/120
	I0729 01:13:38.007536   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 117/120
	I0729 01:13:39.009061   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 118/120
	I0729 01:13:40.010291   33413 main.go:141] libmachine: (ha-845088-m03) Waiting for machine to stop 119/120
	I0729 01:13:41.010863   33413 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 01:13:41.010927   33413 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 01:13:41.012937   33413 out.go:177] 
	W0729 01:13:41.014741   33413 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 01:13:41.014757   33413 out.go:239] * 
	* 
	W0729 01:13:41.016890   33413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 01:13:41.018749   33413 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-845088 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845088 --wait=true -v=7 --alsologtostderr
E0729 01:16:27.214950   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:17:23.071337   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-845088 --wait=true -v=7 --alsologtostderr: (4m5.414673578s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-845088
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-845088 -n ha-845088
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-845088 logs -n 25: (2.125603043s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m04 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp testdata/cp-test.txt                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088:/home/docker/cp-test_ha-845088-m04_ha-845088.txt                       |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088 sudo cat                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088.txt                                 |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03:/home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m03 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-845088 node stop m02 -v=7                                                     | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-845088 node start m02 -v=7                                                    | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-845088 -v=7                                                           | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-845088 -v=7                                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-845088 --wait=true -v=7                                                    | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:13 UTC | 29 Jul 24 01:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-845088                                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:17 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:13:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:13:41.062101   33891 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:13:41.062214   33891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:13:41.062226   33891 out.go:304] Setting ErrFile to fd 2...
	I0729 01:13:41.062232   33891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:13:41.062459   33891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:13:41.063025   33891 out.go:298] Setting JSON to false
	I0729 01:13:41.063961   33891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3367,"bootTime":1722212254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:13:41.064022   33891 start.go:139] virtualization: kvm guest
	I0729 01:13:41.066487   33891 out.go:177] * [ha-845088] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:13:41.068271   33891 notify.go:220] Checking for updates...
	I0729 01:13:41.068316   33891 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:13:41.070022   33891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:13:41.071746   33891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:13:41.073409   33891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:13:41.074854   33891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:13:41.076418   33891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:13:41.078426   33891 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:13:41.078585   33891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:13:41.079170   33891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:13:41.079218   33891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:13:41.095221   33891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0729 01:13:41.095614   33891 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:13:41.096111   33891 main.go:141] libmachine: Using API Version  1
	I0729 01:13:41.096155   33891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:13:41.096459   33891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:13:41.096665   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:13:41.131890   33891 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:13:41.133183   33891 start.go:297] selected driver: kvm2
	I0729 01:13:41.133201   33891 start.go:901] validating driver "kvm2" against &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:13:41.133366   33891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:13:41.133748   33891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:13:41.133843   33891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:13:41.149414   33891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:13:41.150081   33891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:13:41.150110   33891 cni.go:84] Creating CNI manager for ""
	I0729 01:13:41.150116   33891 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 01:13:41.150175   33891 start.go:340] cluster config:
	{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:13:41.150298   33891 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:13:41.152318   33891 out.go:177] * Starting "ha-845088" primary control-plane node in "ha-845088" cluster
	I0729 01:13:41.153920   33891 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:13:41.153953   33891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:13:41.153962   33891 cache.go:56] Caching tarball of preloaded images
	I0729 01:13:41.154030   33891 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:13:41.154039   33891 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:13:41.154154   33891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:13:41.154349   33891 start.go:360] acquireMachinesLock for ha-845088: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:13:41.154388   33891 start.go:364] duration metric: took 22.178µs to acquireMachinesLock for "ha-845088"
	I0729 01:13:41.154400   33891 start.go:96] Skipping create...Using existing machine configuration
	I0729 01:13:41.154410   33891 fix.go:54] fixHost starting: 
	I0729 01:13:41.154648   33891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:13:41.154681   33891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:13:41.169657   33891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41299
	I0729 01:13:41.170095   33891 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:13:41.170601   33891 main.go:141] libmachine: Using API Version  1
	I0729 01:13:41.170621   33891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:13:41.170997   33891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:13:41.171214   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:13:41.171384   33891 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:13:41.173207   33891 fix.go:112] recreateIfNeeded on ha-845088: state=Running err=<nil>
	W0729 01:13:41.173229   33891 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 01:13:41.176294   33891 out.go:177] * Updating the running kvm2 "ha-845088" VM ...
	I0729 01:13:41.177586   33891 machine.go:94] provisionDockerMachine start ...
	I0729 01:13:41.177602   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:13:41.177804   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.180477   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.180995   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.181025   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.181203   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.181386   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.181513   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.181729   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.181911   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.182085   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.182095   33891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:13:41.288433   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088
	
	I0729 01:13:41.288456   33891 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:13:41.288743   33891 buildroot.go:166] provisioning hostname "ha-845088"
	I0729 01:13:41.288771   33891 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:13:41.289033   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.292552   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.293070   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.293095   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.293360   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.293567   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.293708   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.293870   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.294060   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.294244   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.294260   33891 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088 && echo "ha-845088" | sudo tee /etc/hostname
	I0729 01:13:41.414837   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088
	
	I0729 01:13:41.414865   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.418065   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.418524   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.418553   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.418737   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.418934   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.419134   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.419362   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.419524   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.419683   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.419708   33891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:13:41.524747   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
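
The SSH command above is the provisioner's idempotent /etc/hosts fixup: it only touches the file when the hostname is missing, either rewriting an existing 127.0.1.1 entry or appending one. A minimal Go sketch of how such a snippet could be assembled for an arbitrary hostname (the helper name and structure are illustrative, not minikube's actual code):

```go
package main

import "fmt"

// buildHostsFixup returns a shell snippet that ensures /etc/hosts maps
// 127.0.1.1 to the given hostname without duplicating an existing entry.
// Illustrative only; the real provisioner may build this differently.
func buildHostsFixup(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(buildHostsFixup("ha-845088"))
}
```
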
	I0729 01:13:41.524787   33891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:13:41.524827   33891 buildroot.go:174] setting up certificates
	I0729 01:13:41.524843   33891 provision.go:84] configureAuth start
	I0729 01:13:41.524855   33891 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:13:41.525113   33891 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:13:41.528098   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.528488   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.528515   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.528700   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.531229   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.531672   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.531697   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.531850   33891 provision.go:143] copyHostCerts
	I0729 01:13:41.531894   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:13:41.531943   33891 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:13:41.531959   33891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:13:41.532041   33891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:13:41.532151   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:13:41.532178   33891 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:13:41.532187   33891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:13:41.532225   33891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:13:41.532275   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:13:41.532292   33891 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:13:41.532302   33891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:13:41.532326   33891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:13:41.532376   33891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088 san=[127.0.0.1 192.168.39.69 ha-845088 localhost minikube]
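
provision.go generates a server certificate whose SANs cover the loopback address, the machine IP, the hostname, and the generic names shown in the log line above. A self-contained sketch of issuing such a certificate with Go's standard library (self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirroring the log: 127.0.0.1, 192.168.39.69, ha-845088, localhost, minikube.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-845088"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.69")},
		DNSNames:     []string{"ha-845088", "localhost", "minikube"},
	}
	// Self-signed for illustration; minikube signs this with its own CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
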
	I0729 01:13:41.789249   33891 provision.go:177] copyRemoteCerts
	I0729 01:13:41.789301   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:13:41.789328   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.792384   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.792878   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.792906   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.793193   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.793396   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.793609   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.793811   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:13:41.874732   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:13:41.874802   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:13:41.903865   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:13:41.903943   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 01:13:41.929320   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:13:41.929386   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:13:41.954369   33891 provision.go:87] duration metric: took 429.511564ms to configureAuth
	I0729 01:13:41.954399   33891 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:13:41.954610   33891 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:13:41.954674   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.957617   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.958005   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.958025   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.958300   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.958476   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.958597   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.958722   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.958862   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.959014   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.959036   33891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:15:12.893835   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:15:12.893863   33891 machine.go:97] duration metric: took 1m31.71626385s to provisionDockerMachine
	I0729 01:15:12.893879   33891 start.go:293] postStartSetup for "ha-845088" (driver="kvm2")
	I0729 01:15:12.893890   33891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:15:12.893904   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:12.894179   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:15:12.894208   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:12.897243   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:12.897705   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:12.897734   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:12.897895   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:12.898062   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:12.898213   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:12.898321   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:15:12.982782   33891 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:15:12.987317   33891 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:15:12.987347   33891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:15:12.987416   33891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:15:12.987501   33891 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:15:12.987512   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:15:12.987616   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:15:12.997232   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:15:13.022977   33891 start.go:296] duration metric: took 129.083412ms for postStartSetup
	I0729 01:15:13.023085   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.023384   33891 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 01:15:13.023408   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.026031   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.026438   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.026466   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.026683   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.026875   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.027071   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.027215   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	W0729 01:15:13.110767   33891 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 01:15:13.110789   33891 fix.go:56] duration metric: took 1m31.956378994s for fixHost
	I0729 01:15:13.110809   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.113595   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.113972   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.113998   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.114223   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.114390   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.114536   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.114704   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.114991   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:15:13.115183   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:15:13.115194   33891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 01:15:13.220049   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215713.183881719
	
	I0729 01:15:13.220068   33891 fix.go:216] guest clock: 1722215713.183881719
	I0729 01:15:13.220079   33891 fix.go:229] Guest: 2024-07-29 01:15:13.183881719 +0000 UTC Remote: 2024-07-29 01:15:13.110795249 +0000 UTC m=+92.082182863 (delta=73.08647ms)
	I0729 01:15:13.220109   33891 fix.go:200] guest clock delta is within tolerance: 73.08647ms
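
fixHost compares the guest clock (read over SSH with `date +%s.%N`) against the host clock and only intervenes when the delta exceeds a tolerance; here the 73ms delta passes. A small sketch of that comparison, parsing the same seconds.nanoseconds output (the tolerance value is illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1722215713.183881719" (date +%s.%N) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1722215713.183881719")
	remote := time.Date(2024, 7, 29, 1, 15, 13, 110795249, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
}
```
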
	I0729 01:15:13.220115   33891 start.go:83] releasing machines lock for "ha-845088", held for 1m32.065718875s
	I0729 01:15:13.220132   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.220385   33891 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:15:13.223341   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.223785   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.223816   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.224062   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.224596   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.225173   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.225258   33891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:15:13.225297   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.225430   33891 ssh_runner.go:195] Run: cat /version.json
	I0729 01:15:13.225451   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.228170   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.228479   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.228550   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.228573   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.228748   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.228921   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.228934   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.228955   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.229113   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.229148   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.229277   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.229338   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:15:13.229391   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.229495   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:15:13.304555   33891 ssh_runner.go:195] Run: systemctl --version
	I0729 01:15:13.328118   33891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:15:13.491415   33891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:15:13.497418   33891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:15:13.497486   33891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:15:13.506688   33891 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:15:13.506712   33891 start.go:495] detecting cgroup driver to use...
	I0729 01:15:13.506784   33891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:15:13.524740   33891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:15:13.540083   33891 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:15:13.540134   33891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:15:13.555396   33891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:15:13.569364   33891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:15:13.777228   33891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:15:13.972708   33891 docker.go:233] disabling docker service ...
	I0729 01:15:13.972785   33891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:15:13.993589   33891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:15:14.007199   33891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:15:14.149890   33891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:15:14.298387   33891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:15:14.312683   33891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:15:14.332228   33891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:15:14.332301   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.343074   33891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:15:14.343142   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.354656   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.366091   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.377262   33891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:15:14.388980   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.400200   33891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.412335   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.423015   33891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:15:14.432791   33891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:15:14.442929   33891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:15:14.587778   33891 ssh_runner.go:195] Run: sudo systemctl restart crio
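
The runtime configuration step above is a sequence of sed edits against /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cgroup_manager to cgroupfs, and reset conmon_cgroup, followed by a daemon-reload and crio restart. A rough standard-library equivalent of the first few substitutions (the sample config content is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting content for 02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.2"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the sed edits from the log: pin the pause image and switch to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any conmon_cgroup line, then re-add it as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```
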
	I0729 01:15:14.915901   33891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:15:14.915967   33891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:15:14.922014   33891 start.go:563] Will wait 60s for crictl version
	I0729 01:15:14.922069   33891 ssh_runner.go:195] Run: which crictl
	I0729 01:15:14.926216   33891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:15:14.963918   33891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:15:14.963996   33891 ssh_runner.go:195] Run: crio --version
	I0729 01:15:14.997201   33891 ssh_runner.go:195] Run: crio --version
	I0729 01:15:15.030767   33891 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:15:15.031982   33891 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:15:15.034562   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:15.035011   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:15.035030   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:15.035262   33891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:15:15.040458   33891 kubeadm.go:883] updating cluster {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:15:15.040644   33891 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:15:15.040707   33891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:15:15.088900   33891 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:15:15.088925   33891 crio.go:433] Images already preloaded, skipping extraction
	I0729 01:15:15.088977   33891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:15:15.129079   33891 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:15:15.129100   33891 cache_images.go:84] Images are preloaded, skipping loading
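
The preload check runs `sudo crictl images --output json` and inspects the decoded image list to decide whether extraction can be skipped, as it is here. A small sketch of decoding that output; the struct models only the fields used and its shape is an assumption about crictl's JSON, not taken from minikube:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models just the fields needed here; field names are an
// assumption based on typical `crictl images --output json` output.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	var imgs crictlImages
	if err := json.Unmarshal(raw, &imgs); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.9"])
}
```
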
	I0729 01:15:15.129123   33891 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.30.3 crio true true} ...
	I0729 01:15:15.129244   33891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:15:15.129324   33891 ssh_runner.go:195] Run: crio config
	I0729 01:15:15.178413   33891 cni.go:84] Creating CNI manager for ""
	I0729 01:15:15.178434   33891 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 01:15:15.178446   33891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:15:15.178483   33891 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-845088 NodeName:ha-845088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:15:15.178655   33891 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-845088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
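The kubeadm config above is rendered from the options listed at kubeadm.go:181 (advertise address, pod and service CIDRs, cgroup driver, CRI socket, and so on). A compressed text/template sketch of rendering just the InitConfiguration portion; the template text and field names are illustrative rather than minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// initCfg holds only the values needed for this fragment of the kubeadm config.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.39.69",
		BindPort:         8443,
		NodeName:         "ha-845088",
		NodeIP:           "192.168.39.69",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}); err != nil {
		panic(err)
	}
}
```
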
	I0729 01:15:15.178678   33891 kube-vip.go:115] generating kube-vip config ...
	I0729 01:15:15.178734   33891 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:15:15.191282   33891 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:15:15.191405   33891 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
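
The generated kube-vip manifest above enables leader election for the control-plane VIP 192.168.39.254 with a 5s lease, 3s renew deadline, and 1s retry period. A tiny sketch of the ordering those values have to satisfy (retry < renew < lease), using the numbers from the manifest; the check itself is illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// validLeaseTiming enforces the usual leader-election ordering:
// retryPeriod < renewDeadline < leaseDuration.
func validLeaseTiming(lease, renew, retry time.Duration) bool {
	return retry < renew && renew < lease
}

func main() {
	// Values from the manifest above: vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1.
	fmt.Println(validLeaseTiming(5*time.Second, 3*time.Second, 1*time.Second)) // true
}
```
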
	I0729 01:15:15.191456   33891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:15:15.201630   33891 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:15:15.201696   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 01:15:15.212182   33891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 01:15:15.229290   33891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:15:15.247963   33891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 01:15:15.265352   33891 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 01:15:15.282140   33891 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:15:15.287505   33891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:15:15.431127   33891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:15:15.446201   33891 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.69
	I0729 01:15:15.446227   33891 certs.go:194] generating shared ca certs ...
	I0729 01:15:15.446244   33891 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:15:15.446389   33891 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:15:15.446425   33891 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:15:15.446434   33891 certs.go:256] generating profile certs ...
	I0729 01:15:15.446502   33891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:15:15.446528   33891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b
	I0729 01:15:15.446543   33891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.68 192.168.39.243 192.168.39.254]
	I0729 01:15:15.642390   33891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b ...
	I0729 01:15:15.642418   33891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b: {Name:mk3016e5fa4b796d1cce4dd4d789b10ea203a7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:15:15.642577   33891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b ...
	I0729 01:15:15.642588   33891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b: {Name:mk6b25a1b73ace691e78b028aa7c87f136b36f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:15:15.642654   33891 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:15:15.642797   33891 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
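
The apiserver certificate written above carries 10.96.0.1 as a SAN, i.e. the first usable address of the 10.96.0.0/12 service CIDR, alongside the node IPs and the 192.168.39.254 VIP. A short sketch of deriving that address from the CIDR with net/netip:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstServiceIP returns the first host address of a service CIDR,
// which is the ClusterIP the "kubernetes" service receives.
func firstServiceIP(cidr string) (netip.Addr, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, err
	}
	return p.Masked().Addr().Next(), nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}
```
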
	I0729 01:15:15.642923   33891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:15:15.642937   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:15:15.642949   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:15:15.642959   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:15:15.642970   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:15:15.642987   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:15:15.643000   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:15:15.643015   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:15:15.643024   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:15:15.643094   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:15:15.643122   33891 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:15:15.643131   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:15:15.643151   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:15:15.643171   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:15:15.643191   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:15:15.643224   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:15:15.643248   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.643260   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.643271   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:15:15.643823   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:15:15.674022   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:15:15.698135   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:15:15.721436   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:15:15.745227   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 01:15:15.768851   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:15:15.793098   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:15:15.817667   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:15:15.842089   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:15:15.867130   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:15:15.891233   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:15:15.914192   33891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:15:15.930927   33891 ssh_runner.go:195] Run: openssl version
	I0729 01:15:15.937005   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:15:15.948070   33891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.952628   33891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.952670   33891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.958337   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:15:15.968779   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:15:15.979994   33891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.984574   33891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.984641   33891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.990404   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:15:15.999963   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:15:16.010817   33891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:15:16.015215   33891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:15:16.015265   33891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:15:16.021169   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
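
Each trusted certificate above is installed by hashing it (`openssl x509 -hash -noout -in <pem>`) and symlinking /etc/ssl/certs/<hash>.0 to it, which is what the ln -fs commands do. A sketch that shells out the same way; it assumes openssl is on PATH and uses illustrative paths:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert hashes a PEM cert with openssl and symlinks <certsDir>/<hash>.0 to it,
// mirroring the `openssl x509 -hash` + `ln -fs` steps in the log.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths; the log uses /usr/share/ca-certificates and /etc/ssl/certs.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
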
	I0729 01:15:16.031224   33891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:15:16.035668   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:15:16.041397   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:15:16.047198   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:15:16.052807   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:15:16.058641   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:15:16.065264   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 01:15:16.071014   33891 kubeadm.go:392] StartCluster: {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:15:16.071202   33891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:15:16.071277   33891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:15:16.117465   33891 cri.go:89] found id: "58a549d922e71ebab3f11fecd9f4cce112027221053d88dc2f10005406e0f06a"
	I0729 01:15:16.117486   33891 cri.go:89] found id: "2dbd4d5717dd2a71e5dc834f2c702d6cdac2ec32e11fdc8b865b3c82760aaf83"
	I0729 01:15:16.117491   33891 cri.go:89] found id: "a741926a458e961c77263e131b90a29509b116bec3e6d34bb71176d3991ca8b1"
	I0729 01:15:16.117494   33891 cri.go:89] found id: "dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec"
	I0729 01:15:16.117497   33891 cri.go:89] found id: "102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6"
	I0729 01:15:16.117500   33891 cri.go:89] found id: "4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87"
	I0729 01:15:16.117503   33891 cri.go:89] found id: "b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198"
	I0729 01:15:16.117505   33891 cri.go:89] found id: "ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8"
	I0729 01:15:16.117508   33891 cri.go:89] found id: "994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37"
	I0729 01:15:16.117513   33891 cri.go:89] found id: "2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46"
	I0729 01:15:16.117515   33891 cri.go:89] found id: "71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60"
	I0729 01:15:16.117518   33891 cri.go:89] found id: "2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c"
	I0729 01:15:16.117520   33891 cri.go:89] found id: "32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5"
	I0729 01:15:16.117523   33891 cri.go:89] found id: ""
	I0729 01:15:16.117568   33891 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.465264264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dfa5ddf-9dae-435c-9f27-037da026e350 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.466795162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5395fbe-8eed-4c76-a5a5-6ce4c690e994 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.467872458Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215867467841192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5395fbe-8eed-4c76-a5a5-6ce4c690e994 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.468870042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de90d027-8d2c-46bd-a114-a4d6119328c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.468943104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de90d027-8d2c-46bd-a114-a4d6119328c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.469470460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de90d027-8d2c-46bd-a114-a4d6119328c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.518498771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0fe6898-e257-47a8-b4c2-b373c93bbe18 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.518612558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0fe6898-e257-47a8-b4c2-b373c93bbe18 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.520250964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af9b3ac7-e926-40cd-a070-d9919d8732a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.520842549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215867520812083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af9b3ac7-e926-40cd-a070-d9919d8732a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.521349752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e9122a7-754c-4483-bdec-a7916abab512 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.521443148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e9122a7-754c-4483-bdec-a7916abab512 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.522081400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e9122a7-754c-4483-bdec-a7916abab512 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.547728404Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8841f932-c423-40dc-9985-d839d7e3193a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.551820927Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-kdxhf,Uid:3d626cc7-0294-43eb-903b-83ee7ea03f3d,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215752285418099,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:06:57.611733935Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-845088,Uid:2d75d1a8d19882beac04fd6b3dc845a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722215735824673836,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{kubernetes.io/config.hash: 2d75d1a8d19882beac04fd6b3dc845a0,kubernetes.io/config.seen: 2024-07-29T01:15:15.247497166Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-x4jjj,Uid:659a9fc3-a597-401d-9ceb-71a04f049d8c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718694760681,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-29T01:04:26.949120573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-26phs,Uid:0fa00166-935c-4e30-899d-0ae105083984,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722215718606565324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:26.958295047Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&PodSandboxMetadata{Name:etcd-ha-845088,Uid:b06d8c918adf1d541412dd0e3ab48df0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718588397708,Labels:map[string]string{componen
t: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: b06d8c918adf1d541412dd0e3ab48df0,kubernetes.io/config.seen: 2024-07-29T01:03:56.050661312Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-845088,Uid:83f94015277f1fa93b4433220cb8f47a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718585770511,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,tier: control-plane,},Annotations:map[string]string{ku
bernetes.io/config.hash: 83f94015277f1fa93b4433220cb8f47a,kubernetes.io/config.seen: 2024-07-29T01:03:56.050666485Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&PodSandboxMetadata{Name:kindnet-jz7gr,Uid:3d184fd2-5bfc-40bd-b7b3-98934d58a689,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718578637820,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:08.679994515Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&PodSandboxMetadata{Name:kube-proxy-tmzt7,Uid:f2e92bb0-87c0-4d4e-a
e34-d67970a61dc9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718554092189,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:08.674925989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-845088,Uid:8a82577ef7e027cb45d5457528698a5d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718553304446,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7
e027cb45d5457528698a5d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8a82577ef7e027cb45d5457528698a5d,kubernetes.io/config.seen: 2024-07-29T01:03:56.050665722Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9b770bc2-7368-4b86-89ff-399d60f17817,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718542579346,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-te
st\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T01:04:26.964098831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-845088,Uid:2688c12ddc0a5ab7af0b9dd884185c58,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722215718536789299,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.69:8443,kubernetes.io/config.hash: 2688c12ddc0a5ab7af0b9dd884185c58,kubernetes.io/config.seen: 2024-07-29T01:03:56.050664679Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-kdxhf,Uid:3d626cc7-0294-43eb-903b-83ee7ea03f3d,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215217926903626,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:06:57.611733935Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-26phs,Uid:0fa00166-935c-4e30-899d-0ae105083984,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215067264880076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:26.958295047Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-x4jjj,Uid:659a9fc3-a597-401d-9ceb-71a04f049d8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215067259651871,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:26.949120573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&PodSandboxMetadata{Name:kube-proxy-tmzt7,Uid:f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215049891076161,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:08.674925989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&PodSandboxMetadata{Name:kindnet-jz7gr,Uid:3d184fd2-5bfc-40bd-b7b3-98934d58a689,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215049890438690,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:04:08.679994515Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-845088,Uid:83f94015277f1fa93b4433220cb8f47a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215029671566311,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 83f94015277f1fa93b4433220cb8f47a,kubernetes.io/config.seen: 2024-07-29T01:03:49.192825665Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&PodSandboxMetadata{Name:etcd-ha-845088,Uid:b06d8c918adf1d541412dd0e3ab48df0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722215029663529321,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.69:2379,kubernetes.io/config.hash: b06d8c918ad
f1d541412dd0e3ab48df0,kubernetes.io/config.seen: 2024-07-29T01:03:49.192819079Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8841f932-c423-40dc-9985-d839d7e3193a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.553919630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80842294-00d9-4595-ab2e-9a269e46327c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.554059732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80842294-00d9-4595-ab2e-9a269e46327c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.555949175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80842294-00d9-4595-ab2e-9a269e46327c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.571300605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21cdb010-24d7-4430-8849-499602f5e73d name=/runtime.v1.RuntimeService/Version
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.571403360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21cdb010-24d7-4430-8849-499602f5e73d name=/runtime.v1.RuntimeService/Version
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.572865216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bf5a0fb-50a0-4fb4-8f7e-ea413ec74801 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.573648319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722215867573617909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bf5a0fb-50a0-4fb4-8f7e-ea413ec74801 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.575488482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2527307-a644-4e16-aa12-e393448fc595 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.575789198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2527307-a644-4e16-aa12-e393448fc595 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:17:47 ha-845088 crio[3820]: time="2024-07-29 01:17:47.577433865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2527307-a644-4e16-aa12-e393448fc595 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4e41cca145ab2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   325ddf5530742       storage-provisioner
	6ae848e053a41       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   f1bfea8141969       kube-controller-manager-ha-845088
	c6a3220dc04b2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   fe384baae5f62       kube-apiserver-ha-845088
	817587b77a3ed       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   e390f2207379f       busybox-fc5497c4f-kdxhf
	7cea69b0d5cde       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   325ddf5530742       storage-provisioner
	7261b3d0b0caa       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   bd91aa82fefd6       kube-vip-ha-845088
	b2a4bee1eb8bc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   cf9d25c8a060c       coredns-7db6d8ff4d-x4jjj
	5540fc40e2a7f       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   ee2397a596835       kindnet-jz7gr
	8ca67b6898876       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   a9afc28c0b39e       kube-proxy-tmzt7
	578dedb8fb465       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   dec544e388e32       coredns-7db6d8ff4d-26phs
	5792cd9b8f198       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   f1bfea8141969       kube-controller-manager-ha-845088
	98efb6dd5b438       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   6d3182746ca82       etcd-ha-845088
	416edea5a4ef1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   af895d5082b72       kube-scheduler-ha-845088
	d805fa439728f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   fe384baae5f62       kube-apiserver-ha-845088
	393f89e96685f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   077fc92624630       busybox-fc5497c4f-kdxhf
	102a2205a11ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   860aff4792108       coredns-7db6d8ff4d-26phs
	4c9a1e2ce8399       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   5998a0c18499b       coredns-7db6d8ff4d-x4jjj
	b117823d9ea03       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   d036858417b61       kindnet-jz7gr
	ba58523a71dfb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   a37edf1e80380       kube-proxy-tmzt7
	2d545f40bcf5d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   00d828e6fd11c       etcd-ha-845088
	71cb29192a2ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   64651fd976b6f       kube-scheduler-ha-845088
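	Note: the container listing above follows the crictl ps -a column layout (container ID, image, created, state, name, attempt, pod ID, pod). As a hedged sketch, assuming the profile name shown in these logs (ha-845088) and the same minikube binary used elsewhere in this report, an equivalent listing could be pulled from the node with:
	
	  out/minikube-linux-amd64 -p ha-845088 ssh "sudo crictl ps -a"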
	
	
	==> coredns [102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6] <==
	[INFO] 10.244.0.4:56145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107111s
	[INFO] 10.244.0.4:49547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013737s
	[INFO] 10.244.2.2:50551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157425s
	[INFO] 10.244.2.2:54720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000849s
	[INFO] 10.244.2.2:46977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133922s
	[INFO] 10.244.2.2:52278 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098427s
	[INFO] 10.244.2.2:33523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166768s
	[INFO] 10.244.2.2:56762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127309s
	[INFO] 10.244.1.2:60690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162836s
	[INFO] 10.244.0.4:53481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125124s
	[INFO] 10.244.0.4:36302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006046s
	[INFO] 10.244.2.2:51131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200754s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135186s
	[INFO] 10.244.2.2:47188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095941s
	[INFO] 10.244.2.2:45175 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088023s
	[INFO] 10.244.1.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271227s
	[INFO] 10.244.0.4:35507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089711s
	[INFO] 10.244.0.4:48138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191709s
	[INFO] 10.244.2.2:46681 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084718s
	[INFO] 10.244.2.2:58403 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190529s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87] <==
	[INFO] 10.244.1.2:54896 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275281s
	[INFO] 10.244.1.2:36709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149351s
	[INFO] 10.244.1.2:35599 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014616s
	[INFO] 10.244.1.2:40232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145035s
	[INFO] 10.244.0.4:42879 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002077041s
	[INFO] 10.244.0.4:46236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001377262s
	[INFO] 10.244.2.2:60143 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018397s
	[INFO] 10.244.2.2:33059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001229041s
	[INFO] 10.244.1.2:50949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114887s
	[INFO] 10.244.1.2:41895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099234s
	[INFO] 10.244.1.2:57885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008087s
	[INFO] 10.244.0.4:46809 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202377s
	[INFO] 10.244.0.4:54702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067695s
	[INFO] 10.244.1.2:33676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193639s
	[INFO] 10.244.1.2:35018 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014376s
	[INFO] 10.244.1.2:58362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164011s
	[INFO] 10.244.0.4:42745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108289s
	[INFO] 10.244.0.4:38059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080482s
	[INFO] 10.244.2.2:57416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132756s
	[INFO] 10.244.2.2:34696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000282968s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41280->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1018947224]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:15:31.266) (total time: 10505ms):
	Trace[1018947224]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41280->10.96.0.1:443: read: connection reset by peer 10504ms (01:15:41.771)
	Trace[1018947224]: [10.505018215s] [10.505018215s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41280->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42724->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1836866381]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:15:31.698) (total time: 10072ms):
	Trace[1836866381]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42724->10.96.0.1:443: read: connection reset by peer 10072ms (01:15:41.770)
	Trace[1836866381]: [10.072483696s] [10.072483696s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42724->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
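	Note: the repeated "dial tcp 10.96.0.1:443: connect: no route to host" and "connection refused" errors in the two restarted coredns containers above indicate CoreDNS could not reach the kube-apiserver through the cluster service IP while the control plane was coming back up. A hedged sketch of follow-up checks (assuming the kubectl context name matches the profile, ha-845088):
	
	  kubectl --context ha-845088 get endpoints kubernetes                             # apiserver endpoints behind 10.96.0.1
	  kubectl --context ha-845088 -n kube-system get pods -l k8s-app=kube-dns -o wide  # where the coredns replicas run
	  kubectl --context ha-845088 -n kube-system logs -l k8s-app=kube-dns --tail=20    # most recent resolver errors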
	
	
	==> describe nodes <==
	Name:               ha-845088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_03_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:17:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:16:03 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:16:03 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:16:03 +0000   Mon, 29 Jul 2024 01:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:16:03 +0000   Mon, 29 Jul 2024 01:04:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-845088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbb04d72e92946e88c1da68d30c7bef3
	  System UUID:                fbb04d72-e929-46e8-8c1d-a68d30c7bef3
	  Boot ID:                    8609abf0-fb2f-4316-bc25-edde00b876e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kdxhf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-26phs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-x4jjj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-845088                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-jz7gr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-845088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-845088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tmzt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-845088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-845088                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 103s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-845088 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-845088 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-845088 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-845088 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Warning  ContainerGCFailed        2m52s (x2 over 3m52s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           100s                   node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	
	
	Name:               ha-845088-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:05:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:16:46 +0000   Mon, 29 Jul 2024 01:16:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:16:46 +0000   Mon, 29 Jul 2024 01:16:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:16:46 +0000   Mon, 29 Jul 2024 01:16:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:16:46 +0000   Mon, 29 Jul 2024 01:16:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-845088-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71d77df4f03a4876b498a96bcef9ff64
	  System UUID:                71d77df4-f03a-4876-b498-a96bcef9ff64
	  Boot ID:                    9a5d441a-4671-4485-9dfe-2906c2e77a95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dbfgn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-845088-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-p87gx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-845088-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-845088-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j6gxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-845088-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-845088-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 87s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-845088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-845088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-845088-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeNotReady             8m50s                  node-controller  Node ha-845088-m02 status is now: NodeNotReady
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m11s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m11s)  kubelet          Node ha-845088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m11s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           100s                   node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           92s                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	
	
	Name:               ha-845088-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_06_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:06:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:17:20 +0000   Mon, 29 Jul 2024 01:16:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:17:20 +0000   Mon, 29 Jul 2024 01:16:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:17:20 +0000   Mon, 29 Jul 2024 01:16:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:17:20 +0000   Mon, 29 Jul 2024 01:16:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    ha-845088-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a156142ecc543bebea07e4da7f3d99e
	  System UUID:                1a156142-ecc5-43be-bea0-7e4da7f3d99e
	  Boot ID:                    3d0a3f09-80b0-4ae7-a77a-faeba1b4e0dc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wvsr6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-845088-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-fvw2k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-845088-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-845088-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-f4965                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-845088-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-845088-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-845088-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-845088-m03 status is now: NodeNotReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)  kubelet          Node ha-845088-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)  kubelet          Node ha-845088-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 58s                kubelet          Node ha-845088-m03 has been rebooted, boot id: 3d0a3f09-80b0-4ae7-a77a-faeba1b4e0dc
	  Normal   NodeReady                58s                kubelet          Node ha-845088-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-845088-m03 event: Registered Node ha-845088-m03 in Controller
	
	
	Name:               ha-845088-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_07_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:17:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:17:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:17:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:17:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:17:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-845088-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f15978c17b794a0dab280aaa8e6fe8a4
	  System UUID:                f15978c1-7b79-4a0d-ab28-0aaa8e6fe8a4
	  Boot ID:                    a8fabdf9-eba1-4579-ba9a-6e7ee437c264
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rffd2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-bbp9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-845088-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   NodeReady                9m51s              kubelet          Node ha-845088-m04 status is now: NodeReady
	  Normal   RegisteredNode           99s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-845088-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-845088-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-845088-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-845088-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-845088-m04 has been rebooted, boot id: a8fabdf9-eba1-4579-ba9a-6e7ee437c264
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-845088-m04 status is now: NodeReady
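	Note: the four node summaries above are standard kubectl describe output; as a hedged sketch (context name assumed to be the profile name, ha-845088), they can be regenerated with:
	
	  kubectl --context ha-845088 describe nodes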
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.177713] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054473] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057858] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159603] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.120915] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.261683] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.164596] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +4.624660] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060939] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.270727] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.083870] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 01:04] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.392423] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 01:05] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 01:15] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.221001] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.194798] systemd-fstab-generator[3766]: Ignoring "noauto" option for root device
	[  +0.141555] systemd-fstab-generator[3778]: Ignoring "noauto" option for root device
	[  +0.290189] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.847701] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[  +3.511612] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.187104] kauditd_printk_skb: 84 callbacks suppressed
	[ +32.214030] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46] <==
	{"level":"info","ts":"2024-07-29T01:13:42.135812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b [logterm: 2, index: 2317] sent MsgPreVote request to 3ba77f52b23533d8 at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b [logterm: 2, index: 2317] sent MsgPreVote request to 971410e140380cd2 at term 2"}
	{"level":"warn","ts":"2024-07-29T01:13:42.172584Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:13:42.172693Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T01:13:42.172792Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"9199217ddd03919b","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T01:13:42.173149Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173273Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173358Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173543Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9199217ddd03919b","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173659Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.17379Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173839Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173851Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.173865Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.173921Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174096Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174165Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174262Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174306Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.178672Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T01:13:42.17909Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T01:13:42.179123Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-845088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> etcd [98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140] <==
	{"level":"warn","ts":"2024-07-29T01:16:49.262139Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.243:2380/version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:49.262341Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:50.247257Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3ba77f52b23533d8","rtt":"0s","error":"dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:50.247342Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3ba77f52b23533d8","rtt":"0s","error":"dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:53.265113Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.243:2380/version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:53.265258Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:55.247957Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3ba77f52b23533d8","rtt":"0s","error":"dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:55.248129Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3ba77f52b23533d8","rtt":"0s","error":"dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:57.267257Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.243:2380/version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:16:57.267379Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:17:00.249302Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3ba77f52b23533d8","rtt":"0s","error":"dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:17:00.24939Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3ba77f52b23533d8","rtt":"0s","error":"dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:17:01.269786Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.243:2380/version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T01:17:01.269935Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3ba77f52b23533d8","error":"Get \"https://192.168.39.243:2380/version\": dial tcp 192.168.39.243:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T01:17:03.111899Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:03.111966Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:03.112238Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:03.155901Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9199217ddd03919b","to":"3ba77f52b23533d8","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T01:17:03.156292Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:03.16061Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9199217ddd03919b","to":"3ba77f52b23533d8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T01:17:03.160709Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:06.264837Z","caller":"traceutil/trace.go:171","msg":"trace[949785076] linearizableReadLoop","detail":"{readStateIndex:2881; appliedIndex:2881; }","duration":"118.865621ms","start":"2024-07-29T01:17:06.14593Z","end":"2024-07-29T01:17:06.264795Z","steps":["trace[949785076] 'read index received'  (duration: 118.860181ms)","trace[949785076] 'applied index is now lower than readState.Index'  (duration: 4.221µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:17:06.265361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.334957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-845088-m03\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-07-29T01:17:06.26546Z","caller":"traceutil/trace.go:171","msg":"trace[350761670] transaction","detail":"{read_only:false; response_revision:2482; number_of_response:1; }","duration":"133.606729ms","start":"2024-07-29T01:17:06.131826Z","end":"2024-07-29T01:17:06.265432Z","steps":["trace[350761670] 'process raft request'  (duration: 133.109155ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:17:06.265595Z","caller":"traceutil/trace.go:171","msg":"trace[1628044369] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-845088-m03; range_end:; response_count:1; response_revision:2481; }","duration":"119.563487ms","start":"2024-07-29T01:17:06.145924Z","end":"2024-07-29T01:17:06.265488Z","steps":["trace[1628044369] 'agreement among raft nodes before linearized reading'  (duration: 119.05081ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:17:48 up 14 min,  0 users,  load average: 0.23, 0.39, 0.28
	Linux ha-845088 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67] <==
	I0729 01:17:10.642362       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:17:20.632572       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:17:20.632796       1 main.go:299] handling current node
	I0729 01:17:20.632833       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:17:20.632853       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:17:20.633118       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:17:20.633154       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:17:20.633219       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:17:20.633237       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:17:30.637550       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:17:30.637635       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:17:30.637947       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:17:30.637967       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:17:30.638126       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:17:30.638189       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:17:30.638270       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:17:30.638278       1 main.go:299] handling current node
	I0729 01:17:40.632170       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:17:40.632400       1 main.go:299] handling current node
	I0729 01:17:40.632459       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:17:40.632498       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:17:40.632742       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:17:40.632813       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:17:40.632977       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:17:40.633139       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198] <==
	I0729 01:13:06.416513       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:13:16.406930       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:13:16.407058       1 main.go:299] handling current node
	I0729 01:13:16.407088       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:13:16.407094       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:13:16.407363       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:13:16.407372       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:13:16.407437       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:13:16.407442       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:13:26.408274       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:13:26.408380       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:13:26.408793       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:13:26.408837       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:13:26.409091       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:13:26.409102       1 main.go:299] handling current node
	I0729 01:13:26.409115       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:13:26.409119       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:13:36.414695       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:13:36.414759       1 main.go:299] handling current node
	I0729 01:13:36.414788       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:13:36.414793       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:13:36.414954       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:13:36.414994       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:13:36.415136       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:13:36.415162       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250] <==
	I0729 01:16:01.966212       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 01:16:01.966329       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 01:16:02.023328       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:16:02.023364       1 policy_source.go:224] refreshing policies
	I0729 01:16:02.040529       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:16:02.045347       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:16:02.045678       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 01:16:02.047522       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 01:16:02.047554       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 01:16:02.047632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:16:02.055579       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:16:02.057780       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 01:16:02.057913       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 01:16:02.057952       1 aggregator.go:165] initial CRD sync complete...
	I0729 01:16:02.057981       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 01:16:02.057988       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 01:16:02.057995       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:16:02.064082       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0729 01:16:02.067833       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243 192.168.39.68]
	I0729 01:16:02.070389       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:16:02.080124       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 01:16:02.086998       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 01:16:02.965177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 01:16:03.419551       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243 192.168.39.68 192.168.39.69]
	W0729 01:16:13.418763       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.68 192.168.39.69]
	
	
	==> kube-apiserver [d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0] <==
	I0729 01:15:19.725379       1 options.go:221] external host was not specified, using 192.168.39.69
	I0729 01:15:19.726409       1 server.go:148] Version: v1.30.3
	I0729 01:15:19.726460       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:15:20.604975       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 01:15:20.654084       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:15:20.655191       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 01:15:20.655257       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 01:15:20.655631       1 instance.go:299] Using reconciler: lease
	W0729 01:15:40.605672       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 01:15:40.605917       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 01:15:40.661089       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571] <==
	I0729 01:15:20.866207       1 serving.go:380] Generated self-signed cert in-memory
	I0729 01:15:21.295598       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 01:15:21.295637       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:15:21.297290       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 01:15:21.297932       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 01:15:21.298104       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:15:21.298184       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 01:15:41.669127       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.69:8443/healthz\": dial tcp 192.168.39.69:8443: connect: connection refused"
	
	
	==> kube-controller-manager [6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2] <==
	I0729 01:16:16.740950       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 01:16:16.753654       1 shared_informer.go:320] Caches are synced for namespace
	I0729 01:16:16.754195       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 01:16:16.759079       1 shared_informer.go:320] Caches are synced for job
	I0729 01:16:16.764142       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 01:16:16.780356       1 shared_informer.go:320] Caches are synced for disruption
	I0729 01:16:16.786230       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 01:16:16.830146       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 01:16:16.855915       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 01:16:16.918440       1 shared_informer.go:320] Caches are synced for HPA
	I0729 01:16:16.962239       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 01:16:17.368534       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 01:16:17.417386       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 01:16:17.417510       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 01:16:27.360571       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xmlfm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xmlfm\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 01:16:27.361093       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"05ed8c70-6ebe-4528-af13-063d52719c0e", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xmlfm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xmlfm": the object has been modified; please apply your changes to the latest version and try again
	I0729 01:16:27.373271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.987006ms"
	I0729 01:16:27.389683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.906079ms"
	I0729 01:16:27.390541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.879µs"
	I0729 01:16:49.283308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.71742ms"
	I0729 01:16:49.283876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="212.142µs"
	I0729 01:16:51.265788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.682µs"
	I0729 01:17:14.568235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.055453ms"
	I0729 01:17:14.568693       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.996µs"
	I0729 01:17:39.464158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-845088-m04"
	
	
	==> kube-proxy [8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451] <==
	I0729 01:15:20.675213       1 server_linux.go:69] "Using iptables proxy"
	E0729 01:15:23.530690       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:26.603456       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:29.675426       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:35.819993       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:48.107276       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 01:16:04.529528       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 01:16:04.620328       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:16:04.620401       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:16:04.620425       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:16:04.630104       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:16:04.630718       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:16:04.630844       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:16:04.634484       1 config.go:192] "Starting service config controller"
	I0729 01:16:04.634653       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:16:04.634712       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:16:04.634741       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:16:04.635288       1 config.go:319] "Starting node config controller"
	I0729 01:16:04.635342       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:16:04.736199       1 shared_informer.go:320] Caches are synced for node config
	I0729 01:16:04.736258       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:16:04.736291       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8] <==
	E0729 01:12:37.645168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:40.715755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:40.715857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:40.716110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:40.716215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:40.716311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:40.716368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:46.860211       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:46.860274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:46.860361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:46.860395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:49.932271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:49.932760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:56.075906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:56.076313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:59.147815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:59.148052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:59.148306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:59.148402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:13:14.508328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:13:14.508447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:13:17.579066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:13:17.579317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:13:26.794600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:13:26.794777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca] <==
	W0729 01:15:57.073806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.69:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:57.073915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.69:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:57.319973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.69:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:57.320150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.69:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.211285       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.69:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.211356       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.69:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.272879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.69:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.272955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.69:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.509745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.69:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.509808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.69:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.805710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.69:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.805828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.69:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:59.070229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.69:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:59.070285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.69:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:59.520990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.69:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:59.521171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.69:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:16:01.971363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:16:01.971412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:16:01.971495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:16:01.971525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:16:01.971572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:16:01.971608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:16:01.971825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:16:01.971956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0729 01:16:13.475175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60] <==
	W0729 01:13:33.533113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:13:33.533215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:13:33.721255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:13:33.721302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:13:33.841816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 01:13:33.841882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 01:13:34.184983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:13:34.185075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:13:35.022484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 01:13:35.022572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:13:35.339307       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:13:35.339364       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:13:35.573977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:13:35.574080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:13:35.854281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:13:35.854369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:13:41.004073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:13:41.004179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:13:41.409342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 01:13:41.409373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 01:13:41.603564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 01:13:41.603603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 01:13:41.979457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:13:41.979504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:13:42.070471       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 01:15:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:15:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:15:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:15:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:15:57 ha-845088 kubelet[1372]: I0729 01:15:57.323380    1372 status_manager.go:853] "Failed to get status for pod" podUID="8a82577ef7e027cb45d5457528698a5d" pod="kube-system/kube-controller-manager-ha-845088" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 01:16:00 ha-845088 kubelet[1372]: I0729 01:16:00.119102    1372 scope.go:117] "RemoveContainer" containerID="d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0"
	Jul 29 01:16:00 ha-845088 kubelet[1372]: E0729 01:16:00.394420    1372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-845088?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 01:16:00 ha-845088 kubelet[1372]: I0729 01:16:00.394435    1372 status_manager.go:853] "Failed to get status for pod" podUID="f2e92bb0-87c0-4d4e-ae34-d67970a61dc9" pod="kube-system/kube-proxy-tmzt7" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmzt7\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 01:16:03 ha-845088 kubelet[1372]: E0729 01:16:03.466397    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-845088\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 01:16:03 ha-845088 kubelet[1372]: I0729 01:16:03.466451    1372 status_manager.go:853] "Failed to get status for pod" podUID="3d184fd2-5bfc-40bd-b7b3-98934d58a689" pod="kube-system/kindnet-jz7gr" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-jz7gr\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 01:16:04 ha-845088 kubelet[1372]: I0729 01:16:04.120065    1372 scope.go:117] "RemoveContainer" containerID="7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f"
	Jul 29 01:16:04 ha-845088 kubelet[1372]: E0729 01:16:04.120375    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9b770bc2-7368-4b86-89ff-399d60f17817)\"" pod="kube-system/storage-provisioner" podUID="9b770bc2-7368-4b86-89ff-399d60f17817"
	Jul 29 01:16:05 ha-845088 kubelet[1372]: I0729 01:16:05.119377    1372 scope.go:117] "RemoveContainer" containerID="5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571"
	Jul 29 01:16:16 ha-845088 kubelet[1372]: I0729 01:16:16.131747    1372 scope.go:117] "RemoveContainer" containerID="7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f"
	Jul 29 01:16:16 ha-845088 kubelet[1372]: E0729 01:16:16.131932    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9b770bc2-7368-4b86-89ff-399d60f17817)\"" pod="kube-system/storage-provisioner" podUID="9b770bc2-7368-4b86-89ff-399d60f17817"
	Jul 29 01:16:30 ha-845088 kubelet[1372]: I0729 01:16:30.507245    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-kdxhf" podStartSLOduration=570.86378131 podStartE2EDuration="9m33.507210583s" podCreationTimestamp="2024-07-29 01:06:57 +0000 UTC" firstStartedPulling="2024-07-29 01:06:58.205735417 +0000 UTC m=+182.246508671" lastFinishedPulling="2024-07-29 01:07:00.84916469 +0000 UTC m=+184.889937944" observedRunningTime="2024-07-29 01:07:01.893232065 +0000 UTC m=+185.934005339" watchObservedRunningTime="2024-07-29 01:16:30.507210583 +0000 UTC m=+754.547983855"
	Jul 29 01:16:31 ha-845088 kubelet[1372]: I0729 01:16:31.119409    1372 scope.go:117] "RemoveContainer" containerID="7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f"
	Jul 29 01:16:56 ha-845088 kubelet[1372]: E0729 01:16:56.144949    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:16:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:16:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:16:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:16:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:16:57 ha-845088 kubelet[1372]: I0729 01:16:57.120859    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-845088" podUID="23429e30-003b-4bf2-9ab0-fb4d2a2ee5c8"
	Jul 29 01:16:57 ha-845088 kubelet[1372]: I0729 01:16:57.142966    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-845088"
	Jul 29 01:17:06 ha-845088 kubelet[1372]: I0729 01:17:06.268630    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-845088" podStartSLOduration=9.268585298 podStartE2EDuration="9.268585298s" podCreationTimestamp="2024-07-29 01:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 01:17:06.267830448 +0000 UTC m=+790.308603723" watchObservedRunningTime="2024-07-29 01:17:06.268585298 +0000 UTC m=+790.309358574"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 01:17:47.059147   35624 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-9421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-845088 -n ha-845088
helpers_test.go:261: (dbg) Run:  kubectl --context ha-845088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.18s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 stop -v=7 --alsologtostderr: exit status 82 (2m0.47565451s)

                                                
                                                
-- stdout --
	* Stopping node "ha-845088-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:18:07.832253   36032 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:18:07.832498   36032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:18:07.832506   36032 out.go:304] Setting ErrFile to fd 2...
	I0729 01:18:07.832511   36032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:18:07.832673   36032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:18:07.832886   36032 out.go:298] Setting JSON to false
	I0729 01:18:07.832958   36032 mustload.go:65] Loading cluster: ha-845088
	I0729 01:18:07.833295   36032 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:18:07.833376   36032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:18:07.833563   36032 mustload.go:65] Loading cluster: ha-845088
	I0729 01:18:07.833691   36032 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:18:07.833711   36032 stop.go:39] StopHost: ha-845088-m04
	I0729 01:18:07.834031   36032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:18:07.834068   36032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:18:07.849120   36032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0729 01:18:07.849631   36032 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:18:07.850209   36032 main.go:141] libmachine: Using API Version  1
	I0729 01:18:07.850231   36032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:18:07.850593   36032 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:18:07.853031   36032 out.go:177] * Stopping node "ha-845088-m04"  ...
	I0729 01:18:07.854297   36032 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 01:18:07.854340   36032 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:18:07.854590   36032 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 01:18:07.854622   36032 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:18:07.857475   36032 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:18:07.858001   36032 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:17:33 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:18:07.858026   36032 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:18:07.858240   36032 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:18:07.858444   36032 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:18:07.858613   36032 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:18:07.858772   36032 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	I0729 01:18:07.947474   36032 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 01:18:08.001050   36032 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 01:18:08.054821   36032 main.go:141] libmachine: Stopping "ha-845088-m04"...
	I0729 01:18:08.054864   36032 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:18:08.056753   36032 main.go:141] libmachine: (ha-845088-m04) Calling .Stop
	I0729 01:18:08.060634   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 0/120
	I0729 01:18:09.062123   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 1/120
	I0729 01:18:10.063584   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 2/120
	I0729 01:18:11.065285   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 3/120
	I0729 01:18:12.066878   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 4/120
	I0729 01:18:13.068609   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 5/120
	I0729 01:18:14.070232   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 6/120
	I0729 01:18:15.071570   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 7/120
	I0729 01:18:16.073484   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 8/120
	I0729 01:18:17.074682   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 9/120
	I0729 01:18:18.077016   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 10/120
	I0729 01:18:19.078231   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 11/120
	I0729 01:18:20.079438   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 12/120
	I0729 01:18:21.080663   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 13/120
	I0729 01:18:22.083003   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 14/120
	I0729 01:18:23.084438   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 15/120
	I0729 01:18:24.086260   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 16/120
	I0729 01:18:25.087588   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 17/120
	I0729 01:18:26.089016   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 18/120
	I0729 01:18:27.090412   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 19/120
	I0729 01:18:28.092183   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 20/120
	I0729 01:18:29.094278   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 21/120
	I0729 01:18:30.095966   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 22/120
	I0729 01:18:31.097193   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 23/120
	I0729 01:18:32.098396   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 24/120
	I0729 01:18:33.100360   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 25/120
	I0729 01:18:34.101517   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 26/120
	I0729 01:18:35.102881   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 27/120
	I0729 01:18:36.104059   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 28/120
	I0729 01:18:37.105554   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 29/120
	I0729 01:18:38.107931   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 30/120
	I0729 01:18:39.109181   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 31/120
	I0729 01:18:40.110607   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 32/120
	I0729 01:18:41.112447   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 33/120
	I0729 01:18:42.114396   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 34/120
	I0729 01:18:43.116559   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 35/120
	I0729 01:18:44.118332   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 36/120
	I0729 01:18:45.119527   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 37/120
	I0729 01:18:46.121192   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 38/120
	I0729 01:18:47.122573   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 39/120
	I0729 01:18:48.124916   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 40/120
	I0729 01:18:49.126857   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 41/120
	I0729 01:18:50.128158   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 42/120
	I0729 01:18:51.129498   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 43/120
	I0729 01:18:52.131726   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 44/120
	I0729 01:18:53.133799   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 45/120
	I0729 01:18:54.135417   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 46/120
	I0729 01:18:55.136951   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 47/120
	I0729 01:18:56.138285   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 48/120
	I0729 01:18:57.139903   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 49/120
	I0729 01:18:58.142372   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 50/120
	I0729 01:18:59.143660   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 51/120
	I0729 01:19:00.145984   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 52/120
	I0729 01:19:01.147478   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 53/120
	I0729 01:19:02.149838   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 54/120
	I0729 01:19:03.151858   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 55/120
	I0729 01:19:04.154004   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 56/120
	I0729 01:19:05.155387   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 57/120
	I0729 01:19:06.156861   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 58/120
	I0729 01:19:07.158221   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 59/120
	I0729 01:19:08.160418   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 60/120
	I0729 01:19:09.161780   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 61/120
	I0729 01:19:10.163962   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 62/120
	I0729 01:19:11.165390   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 63/120
	I0729 01:19:12.166739   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 64/120
	I0729 01:19:13.168778   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 65/120
	I0729 01:19:14.170295   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 66/120
	I0729 01:19:15.171842   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 67/120
	I0729 01:19:16.173265   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 68/120
	I0729 01:19:17.174823   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 69/120
	I0729 01:19:18.177030   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 70/120
	I0729 01:19:19.178459   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 71/120
	I0729 01:19:20.179961   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 72/120
	I0729 01:19:21.181380   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 73/120
	I0729 01:19:22.182778   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 74/120
	I0729 01:19:23.184685   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 75/120
	I0729 01:19:24.186025   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 76/120
	I0729 01:19:25.187320   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 77/120
	I0729 01:19:26.189498   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 78/120
	I0729 01:19:27.190804   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 79/120
	I0729 01:19:28.192893   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 80/120
	I0729 01:19:29.194303   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 81/120
	I0729 01:19:30.195763   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 82/120
	I0729 01:19:31.197372   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 83/120
	I0729 01:19:32.198712   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 84/120
	I0729 01:19:33.200365   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 85/120
	I0729 01:19:34.201715   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 86/120
	I0729 01:19:35.203035   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 87/120
	I0729 01:19:36.204394   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 88/120
	I0729 01:19:37.206571   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 89/120
	I0729 01:19:38.208463   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 90/120
	I0729 01:19:39.209830   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 91/120
	I0729 01:19:40.211177   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 92/120
	I0729 01:19:41.213607   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 93/120
	I0729 01:19:42.215475   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 94/120
	I0729 01:19:43.217382   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 95/120
	I0729 01:19:44.218794   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 96/120
	I0729 01:19:45.220033   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 97/120
	I0729 01:19:46.222263   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 98/120
	I0729 01:19:47.223416   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 99/120
	I0729 01:19:48.225260   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 100/120
	I0729 01:19:49.226410   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 101/120
	I0729 01:19:50.228182   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 102/120
	I0729 01:19:51.230305   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 103/120
	I0729 01:19:52.232494   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 104/120
	I0729 01:19:53.234558   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 105/120
	I0729 01:19:54.236026   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 106/120
	I0729 01:19:55.237530   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 107/120
	I0729 01:19:56.239556   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 108/120
	I0729 01:19:57.241656   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 109/120
	I0729 01:19:58.243700   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 110/120
	I0729 01:19:59.245527   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 111/120
	I0729 01:20:00.246719   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 112/120
	I0729 01:20:01.247953   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 113/120
	I0729 01:20:02.249602   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 114/120
	I0729 01:20:03.251833   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 115/120
	I0729 01:20:04.253254   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 116/120
	I0729 01:20:05.254684   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 117/120
	I0729 01:20:06.256347   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 118/120
	I0729 01:20:07.257777   36032 main.go:141] libmachine: (ha-845088-m04) Waiting for machine to stop 119/120
	I0729 01:20:08.259203   36032 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 01:20:08.259284   36032 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 01:20:08.261147   36032 out.go:177] 
	W0729 01:20:08.262443   36032 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 01:20:08.262454   36032 out.go:239] * 
	* 
	W0729 01:20:08.264679   36032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 01:20:08.265794   36032 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-845088 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr: exit status 3 (18.9641655s)

                                                
                                                
-- stdout --
	ha-845088
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-845088-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:20:08.309366   36517 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:20:08.309592   36517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:20:08.309600   36517 out.go:304] Setting ErrFile to fd 2...
	I0729 01:20:08.309604   36517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:20:08.309782   36517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:20:08.309932   36517 out.go:298] Setting JSON to false
	I0729 01:20:08.309953   36517 mustload.go:65] Loading cluster: ha-845088
	I0729 01:20:08.310004   36517 notify.go:220] Checking for updates...
	I0729 01:20:08.310310   36517 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:20:08.310323   36517 status.go:255] checking status of ha-845088 ...
	I0729 01:20:08.310712   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.310778   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.329529   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35161
	I0729 01:20:08.329908   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.330415   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.330441   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.330845   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.331077   36517 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:20:08.332756   36517 status.go:330] ha-845088 host status = "Running" (err=<nil>)
	I0729 01:20:08.332770   36517 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:20:08.333054   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.333104   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.347476   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37423
	I0729 01:20:08.347911   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.348395   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.348426   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.348788   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.348988   36517 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:20:08.351766   36517 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:20:08.352139   36517 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:20:08.352170   36517 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:20:08.352371   36517 host.go:66] Checking if "ha-845088" exists ...
	I0729 01:20:08.352651   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.352685   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.367435   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45033
	I0729 01:20:08.367848   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.368290   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.368319   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.368661   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.368818   36517 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:20:08.369049   36517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:20:08.369075   36517 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:20:08.371801   36517 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:20:08.372440   36517 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:20:08.372470   36517 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:20:08.372643   36517 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:20:08.372833   36517 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:20:08.372995   36517 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:20:08.373198   36517 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:20:08.457531   36517 ssh_runner.go:195] Run: systemctl --version
	I0729 01:20:08.465255   36517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:20:08.489070   36517 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:20:08.489099   36517 api_server.go:166] Checking apiserver status ...
	I0729 01:20:08.489149   36517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:20:08.505497   36517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5234/cgroup
	W0729 01:20:08.515727   36517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:20:08.515793   36517 ssh_runner.go:195] Run: ls
	I0729 01:20:08.519992   36517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:20:08.524174   36517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:20:08.524194   36517 status.go:422] ha-845088 apiserver status = Running (err=<nil>)
	I0729 01:20:08.524205   36517 status.go:257] ha-845088 status: &{Name:ha-845088 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:20:08.524228   36517 status.go:255] checking status of ha-845088-m02 ...
	I0729 01:20:08.524521   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.524560   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.539524   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0729 01:20:08.540021   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.540549   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.540569   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.540931   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.541128   36517 main.go:141] libmachine: (ha-845088-m02) Calling .GetState
	I0729 01:20:08.542771   36517 status.go:330] ha-845088-m02 host status = "Running" (err=<nil>)
	I0729 01:20:08.542788   36517 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:20:08.543226   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.543270   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.558018   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46737
	I0729 01:20:08.558435   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.558890   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.558916   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.559295   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.559490   36517 main.go:141] libmachine: (ha-845088-m02) Calling .GetIP
	I0729 01:20:08.562282   36517 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:20:08.562752   36517 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:15:27 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:20:08.562779   36517 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:20:08.562952   36517 host.go:66] Checking if "ha-845088-m02" exists ...
	I0729 01:20:08.563398   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.563444   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.577867   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0729 01:20:08.578348   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.578808   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.578832   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.579240   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.579404   36517 main.go:141] libmachine: (ha-845088-m02) Calling .DriverName
	I0729 01:20:08.579556   36517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:20:08.579570   36517 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHHostname
	I0729 01:20:08.582409   36517 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:20:08.582767   36517 main.go:141] libmachine: (ha-845088-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:55:54", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:15:27 +0000 UTC Type:0 Mac:52:54:00:d1:55:54 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-845088-m02 Clientid:01:52:54:00:d1:55:54}
	I0729 01:20:08.582795   36517 main.go:141] libmachine: (ha-845088-m02) DBG | domain ha-845088-m02 has defined IP address 192.168.39.68 and MAC address 52:54:00:d1:55:54 in network mk-ha-845088
	I0729 01:20:08.582942   36517 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHPort
	I0729 01:20:08.583131   36517 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHKeyPath
	I0729 01:20:08.583295   36517 main.go:141] libmachine: (ha-845088-m02) Calling .GetSSHUsername
	I0729 01:20:08.583452   36517 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m02/id_rsa Username:docker}
	I0729 01:20:08.666179   36517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:20:08.685088   36517 kubeconfig.go:125] found "ha-845088" server: "https://192.168.39.254:8443"
	I0729 01:20:08.685122   36517 api_server.go:166] Checking apiserver status ...
	I0729 01:20:08.685183   36517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:20:08.701622   36517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	W0729 01:20:08.710781   36517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:20:08.710836   36517 ssh_runner.go:195] Run: ls
	I0729 01:20:08.715550   36517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 01:20:08.719837   36517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 01:20:08.719875   36517 status.go:422] ha-845088-m02 apiserver status = Running (err=<nil>)
	I0729 01:20:08.719883   36517 status.go:257] ha-845088-m02 status: &{Name:ha-845088-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:20:08.719901   36517 status.go:255] checking status of ha-845088-m04 ...
	I0729 01:20:08.720180   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.720218   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.737475   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0729 01:20:08.737869   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.738324   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.738341   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.738667   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.738914   36517 main.go:141] libmachine: (ha-845088-m04) Calling .GetState
	I0729 01:20:08.740605   36517 status.go:330] ha-845088-m04 host status = "Running" (err=<nil>)
	I0729 01:20:08.740620   36517 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:20:08.740884   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.740919   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.757092   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0729 01:20:08.757539   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.758125   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.758147   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.758461   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.758667   36517 main.go:141] libmachine: (ha-845088-m04) Calling .GetIP
	I0729 01:20:08.761612   36517 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:20:08.762048   36517 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:17:33 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:20:08.762075   36517 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:20:08.762235   36517 host.go:66] Checking if "ha-845088-m04" exists ...
	I0729 01:20:08.762534   36517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:20:08.762579   36517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:20:08.777548   36517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0729 01:20:08.778030   36517 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:20:08.778534   36517 main.go:141] libmachine: Using API Version  1
	I0729 01:20:08.778560   36517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:20:08.778938   36517 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:20:08.779126   36517 main.go:141] libmachine: (ha-845088-m04) Calling .DriverName
	I0729 01:20:08.779334   36517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:20:08.779356   36517 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHHostname
	I0729 01:20:08.782148   36517 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:20:08.782597   36517 main.go:141] libmachine: (ha-845088-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:1d:28", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:17:33 +0000 UTC Type:0 Mac:52:54:00:99:1d:28 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-845088-m04 Clientid:01:52:54:00:99:1d:28}
	I0729 01:20:08.782624   36517 main.go:141] libmachine: (ha-845088-m04) DBG | domain ha-845088-m04 has defined IP address 192.168.39.136 and MAC address 52:54:00:99:1d:28 in network mk-ha-845088
	I0729 01:20:08.782904   36517 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHPort
	I0729 01:20:08.783085   36517 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHKeyPath
	I0729 01:20:08.783220   36517 main.go:141] libmachine: (ha-845088-m04) Calling .GetSSHUsername
	I0729 01:20:08.783366   36517 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088-m04/id_rsa Username:docker}
	W0729 01:20:27.231249   36517 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.136:22: connect: no route to host
	W0729 01:20:27.231334   36517 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	E0729 01:20:27.231352   36517 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	I0729 01:20:27.231359   36517 status.go:257] ha-845088-m04 status: &{Name:ha-845088-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 01:20:27.231401   36517 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-845088 -n ha-845088
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-845088 logs -n 25: (1.735653465s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m04 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp testdata/cp-test.txt                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088:/home/docker/cp-test_ha-845088-m04_ha-845088.txt                       |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088 sudo cat                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088.txt                                 |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m02:/home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m02 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m03:/home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n                                                                 | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | ha-845088-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-845088 ssh -n ha-845088-m03 sudo cat                                          | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC | 29 Jul 24 01:08 UTC |
	|         | /home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-845088 node stop m02 -v=7                                                     | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-845088 node start m02 -v=7                                                    | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-845088 -v=7                                                           | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-845088 -v=7                                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-845088 --wait=true -v=7                                                    | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:13 UTC | 29 Jul 24 01:17 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-845088                                                                | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:17 UTC |                     |
	| node    | ha-845088 node delete m03 -v=7                                                   | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:17 UTC | 29 Jul 24 01:18 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-845088 stop -v=7                                                              | ha-845088 | jenkins | v1.33.1 | 29 Jul 24 01:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:13:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:13:41.062101   33891 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:13:41.062214   33891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:13:41.062226   33891 out.go:304] Setting ErrFile to fd 2...
	I0729 01:13:41.062232   33891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:13:41.062459   33891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:13:41.063025   33891 out.go:298] Setting JSON to false
	I0729 01:13:41.063961   33891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3367,"bootTime":1722212254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:13:41.064022   33891 start.go:139] virtualization: kvm guest
	I0729 01:13:41.066487   33891 out.go:177] * [ha-845088] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:13:41.068271   33891 notify.go:220] Checking for updates...
	I0729 01:13:41.068316   33891 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:13:41.070022   33891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:13:41.071746   33891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:13:41.073409   33891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:13:41.074854   33891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:13:41.076418   33891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:13:41.078426   33891 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:13:41.078585   33891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:13:41.079170   33891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:13:41.079218   33891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:13:41.095221   33891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0729 01:13:41.095614   33891 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:13:41.096111   33891 main.go:141] libmachine: Using API Version  1
	I0729 01:13:41.096155   33891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:13:41.096459   33891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:13:41.096665   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:13:41.131890   33891 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:13:41.133183   33891 start.go:297] selected driver: kvm2
	I0729 01:13:41.133201   33891 start.go:901] validating driver "kvm2" against &{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:13:41.133366   33891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:13:41.133748   33891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:13:41.133843   33891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:13:41.149414   33891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:13:41.150081   33891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:13:41.150110   33891 cni.go:84] Creating CNI manager for ""
	I0729 01:13:41.150116   33891 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 01:13:41.150175   33891 start.go:340] cluster config:
	{Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:13:41.150298   33891 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:13:41.152318   33891 out.go:177] * Starting "ha-845088" primary control-plane node in "ha-845088" cluster
	I0729 01:13:41.153920   33891 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:13:41.153953   33891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:13:41.153962   33891 cache.go:56] Caching tarball of preloaded images
	I0729 01:13:41.154030   33891 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:13:41.154039   33891 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:13:41.154154   33891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/config.json ...
	I0729 01:13:41.154349   33891 start.go:360] acquireMachinesLock for ha-845088: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:13:41.154388   33891 start.go:364] duration metric: took 22.178µs to acquireMachinesLock for "ha-845088"
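
For context, the machines lock above is acquired by polling with a fixed delay until a timeout expires (the lock spec in the log shows Delay:500ms Timeout:13m0s; here it succeeded on the first try in ~22µs). A minimal sketch of that acquire-with-retry shape, assuming a simple try-lock primitive — names are illustrative, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// tryAcquire polls try() every delay until it succeeds or the timeout elapses.
// This mirrors the Delay/Timeout fields shown in the lock spec logged above.
func tryAcquire(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if try() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	var mu sync.Mutex
	start := time.Now()
	err := tryAcquire(mu.TryLock, 500*time.Millisecond, 13*time.Minute)
	fmt.Printf("acquired=%v in %s\n", err == nil, time.Since(start))
}
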
	I0729 01:13:41.154400   33891 start.go:96] Skipping create...Using existing machine configuration
	I0729 01:13:41.154410   33891 fix.go:54] fixHost starting: 
	I0729 01:13:41.154648   33891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:13:41.154681   33891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:13:41.169657   33891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41299
	I0729 01:13:41.170095   33891 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:13:41.170601   33891 main.go:141] libmachine: Using API Version  1
	I0729 01:13:41.170621   33891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:13:41.170997   33891 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:13:41.171214   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:13:41.171384   33891 main.go:141] libmachine: (ha-845088) Calling .GetState
	I0729 01:13:41.173207   33891 fix.go:112] recreateIfNeeded on ha-845088: state=Running err=<nil>
	W0729 01:13:41.173229   33891 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 01:13:41.176294   33891 out.go:177] * Updating the running kvm2 "ha-845088" VM ...
	I0729 01:13:41.177586   33891 machine.go:94] provisionDockerMachine start ...
	I0729 01:13:41.177602   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:13:41.177804   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.180477   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.180995   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.181025   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.181203   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.181386   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.181513   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.181729   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.181911   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.182085   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.182095   33891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:13:41.288433   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088
	
	I0729 01:13:41.288456   33891 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:13:41.288743   33891 buildroot.go:166] provisioning hostname "ha-845088"
	I0729 01:13:41.288771   33891 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:13:41.289033   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.292552   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.293070   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.293095   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.293360   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.293567   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.293708   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.293870   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.294060   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.294244   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.294260   33891 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-845088 && echo "ha-845088" | sudo tee /etc/hostname
	I0729 01:13:41.414837   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-845088
	
	I0729 01:13:41.414865   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.418065   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.418524   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.418553   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.418737   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.418934   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.419134   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.419362   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.419524   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.419683   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.419708   33891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-845088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-845088/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-845088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:13:41.524747   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
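
The shell snippet the provisioner ran above keeps the new hostname resolvable inside the VM: it replaces an existing 127.0.1.1 entry in /etc/hosts, or appends one if none is present. A small sketch of building that same snippet as a string — the helper name is purely illustrative, not minikube's API:

package main

import "fmt"

// hostsUpdateCmd builds the shell snippet shown in the log above: replace an
// existing 127.0.1.1 line in /etc/hosts, or append one if the hostname is
// missing. (Illustrative helper; the real command is sent over SSH.)
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("ha-845088"))
}
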
	I0729 01:13:41.524787   33891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:13:41.524827   33891 buildroot.go:174] setting up certificates
	I0729 01:13:41.524843   33891 provision.go:84] configureAuth start
	I0729 01:13:41.524855   33891 main.go:141] libmachine: (ha-845088) Calling .GetMachineName
	I0729 01:13:41.525113   33891 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:13:41.528098   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.528488   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.528515   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.528700   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.531229   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.531672   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.531697   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.531850   33891 provision.go:143] copyHostCerts
	I0729 01:13:41.531894   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:13:41.531943   33891 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:13:41.531959   33891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:13:41.532041   33891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:13:41.532151   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:13:41.532178   33891 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:13:41.532187   33891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:13:41.532225   33891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:13:41.532275   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:13:41.532292   33891 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:13:41.532302   33891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:13:41.532326   33891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:13:41.532376   33891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.ha-845088 san=[127.0.0.1 192.168.39.69 ha-845088 localhost minikube]
	I0729 01:13:41.789249   33891 provision.go:177] copyRemoteCerts
	I0729 01:13:41.789301   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:13:41.789328   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.792384   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.792878   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.792906   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.793193   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.793396   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.793609   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.793811   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:13:41.874732   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:13:41.874802   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:13:41.903865   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:13:41.903943   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 01:13:41.929320   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:13:41.929386   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:13:41.954369   33891 provision.go:87] duration metric: took 429.511564ms to configureAuth
	I0729 01:13:41.954399   33891 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:13:41.954610   33891 config.go:182] Loaded profile config "ha-845088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:13:41.954674   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:13:41.957617   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.958005   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:13:41.958025   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:13:41.958300   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:13:41.958476   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.958597   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:13:41.958722   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:13:41.958862   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:13:41.959014   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:13:41.959036   33891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:15:12.893835   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:15:12.893863   33891 machine.go:97] duration metric: took 1m31.71626385s to provisionDockerMachine
	I0729 01:15:12.893879   33891 start.go:293] postStartSetup for "ha-845088" (driver="kvm2")
	I0729 01:15:12.893890   33891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:15:12.893904   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:12.894179   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:15:12.894208   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:12.897243   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:12.897705   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:12.897734   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:12.897895   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:12.898062   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:12.898213   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:12.898321   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:15:12.982782   33891 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:15:12.987317   33891 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:15:12.987347   33891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:15:12.987416   33891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:15:12.987501   33891 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:15:12.987512   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:15:12.987616   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:15:12.997232   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:15:13.022977   33891 start.go:296] duration metric: took 129.083412ms for postStartSetup
	I0729 01:15:13.023085   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.023384   33891 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 01:15:13.023408   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.026031   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.026438   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.026466   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.026683   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.026875   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.027071   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.027215   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	W0729 01:15:13.110767   33891 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 01:15:13.110789   33891 fix.go:56] duration metric: took 1m31.956378994s for fixHost
	I0729 01:15:13.110809   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.113595   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.113972   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.113998   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.114223   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.114390   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.114536   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.114704   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.114991   33891 main.go:141] libmachine: Using SSH client type: native
	I0729 01:15:13.115183   33891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0729 01:15:13.115194   33891 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:15:13.220049   33891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722215713.183881719
	
	I0729 01:15:13.220068   33891 fix.go:216] guest clock: 1722215713.183881719
	I0729 01:15:13.220079   33891 fix.go:229] Guest: 2024-07-29 01:15:13.183881719 +0000 UTC Remote: 2024-07-29 01:15:13.110795249 +0000 UTC m=+92.082182863 (delta=73.08647ms)
	I0729 01:15:13.220109   33891 fix.go:200] guest clock delta is within tolerance: 73.08647ms
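
The guest-clock check above reads the VM's clock over SSH, compares it with the host's, and only resyncs when the delta exceeds a tolerance (here the 73ms delta was accepted). A minimal sketch of that comparison; the 2-second tolerance and names are assumptions for illustration, not minikube's actual values:

package main

import (
	"fmt"
	"time"
)

// clockDelta reports the absolute difference between the guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Now()
	guest := host.Add(73 * time.Millisecond) // delta similar to the one logged above
	const tolerance = 2 * time.Second        // assumed tolerance, not minikube's actual value
	if clockDelta(guest, host) <= tolerance {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("guest clock out of sync; would resync here")
	}
}
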
	I0729 01:15:13.220115   33891 start.go:83] releasing machines lock for "ha-845088", held for 1m32.065718875s
	I0729 01:15:13.220132   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.220385   33891 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:15:13.223341   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.223785   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.223816   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.224062   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.224596   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.225173   33891 main.go:141] libmachine: (ha-845088) Calling .DriverName
	I0729 01:15:13.225258   33891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:15:13.225297   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.225430   33891 ssh_runner.go:195] Run: cat /version.json
	I0729 01:15:13.225451   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHHostname
	I0729 01:15:13.228170   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.228479   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.228550   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.228573   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.228748   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.228921   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.228934   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:13.228955   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:13.229113   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHPort
	I0729 01:15:13.229148   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.229277   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHKeyPath
	I0729 01:15:13.229338   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:15:13.229391   33891 main.go:141] libmachine: (ha-845088) Calling .GetSSHUsername
	I0729 01:15:13.229495   33891 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/ha-845088/id_rsa Username:docker}
	I0729 01:15:13.304555   33891 ssh_runner.go:195] Run: systemctl --version
	I0729 01:15:13.328118   33891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:15:13.491415   33891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:15:13.497418   33891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:15:13.497486   33891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:15:13.506688   33891 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:15:13.506712   33891 start.go:495] detecting cgroup driver to use...
	I0729 01:15:13.506784   33891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:15:13.524740   33891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:15:13.540083   33891 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:15:13.540134   33891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:15:13.555396   33891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:15:13.569364   33891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:15:13.777228   33891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:15:13.972708   33891 docker.go:233] disabling docker service ...
	I0729 01:15:13.972785   33891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:15:13.993589   33891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:15:14.007199   33891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:15:14.149890   33891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:15:14.298387   33891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:15:14.312683   33891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:15:14.332228   33891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:15:14.332301   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.343074   33891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:15:14.343142   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.354656   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.366091   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.377262   33891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:15:14.388980   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.400200   33891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.412335   33891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:15:14.423015   33891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:15:14.432791   33891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:15:14.442929   33891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:15:14.587778   33891 ssh_runner.go:195] Run: sudo systemctl restart crio
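
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, sysctl defaults) and then restarts CRI-O so the new configuration takes effect. A minimal sketch of driving the same kind of edit from Go; the commands are the ones in the log, but the local exec here is a stand-in for minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command locally; in minikube the equivalent commands are
// sent over SSH to the VM (this local stand-in is an assumption for illustration).
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	// Mirror the edits logged above: set the pause image and cgroup manager,
	// then restart CRI-O so the new config is picked up.
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
		fmt.Println("ok:", s)
	}
}
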
	I0729 01:15:14.915901   33891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:15:14.915967   33891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:15:14.922014   33891 start.go:563] Will wait 60s for crictl version
	I0729 01:15:14.922069   33891 ssh_runner.go:195] Run: which crictl
	I0729 01:15:14.926216   33891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:15:14.963918   33891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:15:14.963996   33891 ssh_runner.go:195] Run: crio --version
	I0729 01:15:14.997201   33891 ssh_runner.go:195] Run: crio --version
	I0729 01:15:15.030767   33891 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:15:15.031982   33891 main.go:141] libmachine: (ha-845088) Calling .GetIP
	I0729 01:15:15.034562   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:15.035011   33891 main.go:141] libmachine: (ha-845088) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:b1:bc", ip: ""} in network mk-ha-845088: {Iface:virbr1 ExpiryTime:2024-07-29 02:03:26 +0000 UTC Type:0 Mac:52:54:00:9a:b1:bc Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-845088 Clientid:01:52:54:00:9a:b1:bc}
	I0729 01:15:15.035030   33891 main.go:141] libmachine: (ha-845088) DBG | domain ha-845088 has defined IP address 192.168.39.69 and MAC address 52:54:00:9a:b1:bc in network mk-ha-845088
	I0729 01:15:15.035262   33891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:15:15.040458   33891 kubeadm.go:883] updating cluster {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:15:15.040644   33891 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:15:15.040707   33891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:15:15.088900   33891 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:15:15.088925   33891 crio.go:433] Images already preloaded, skipping extraction
	I0729 01:15:15.088977   33891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:15:15.129079   33891 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:15:15.129100   33891 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:15:15.129123   33891 kubeadm.go:934] updating node { 192.168.39.69 8443 v1.30.3 crio true true} ...
	I0729 01:15:15.129244   33891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-845088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:15:15.129324   33891 ssh_runner.go:195] Run: crio config
	I0729 01:15:15.178413   33891 cni.go:84] Creating CNI manager for ""
	I0729 01:15:15.178434   33891 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 01:15:15.178446   33891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:15:15.178483   33891 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-845088 NodeName:ha-845088 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:15:15.178655   33891 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-845088"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 01:15:15.178678   33891 kube-vip.go:115] generating kube-vip config ...
	I0729 01:15:15.178734   33891 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 01:15:15.191282   33891 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 01:15:15.191405   33891 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
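
The kube-vip manifest above is delivered as a static pod: a few lines later the log shows it being copied to /etc/kubernetes/manifests/kube-vip.yaml, a directory the kubelet watches so the pod is created without any API-server involvement. A minimal sketch of that drop-a-manifest step, with illustrative paths and a trimmed manifest (the real target is /etc/kubernetes/manifests):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeStaticPod drops a manifest into a static pod directory; a kubelet watching
// that path (re)creates the pod on its own. Paths here are illustrative only.
func writeStaticPod(dir, name string, manifest []byte) (string, error) {
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return "", err
	}
	path := filepath.Join(dir, name)
	return path, os.WriteFile(path, manifest, 0o644)
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
	path, err := writeStaticPod(os.TempDir(), "kube-vip.yaml", manifest) // temp dir for the sketch
	if err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}
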
	I0729 01:15:15.191456   33891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:15:15.201630   33891 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:15:15.201696   33891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 01:15:15.212182   33891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0729 01:15:15.229290   33891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:15:15.247963   33891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0729 01:15:15.265352   33891 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 01:15:15.282140   33891 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 01:15:15.287505   33891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:15:15.431127   33891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:15:15.446201   33891 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088 for IP: 192.168.39.69
	I0729 01:15:15.446227   33891 certs.go:194] generating shared ca certs ...
	I0729 01:15:15.446244   33891 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:15:15.446389   33891 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:15:15.446425   33891 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:15:15.446434   33891 certs.go:256] generating profile certs ...
	I0729 01:15:15.446502   33891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/client.key
	I0729 01:15:15.446528   33891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b
	I0729 01:15:15.446543   33891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.69 192.168.39.68 192.168.39.243 192.168.39.254]
	I0729 01:15:15.642390   33891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b ...
	I0729 01:15:15.642418   33891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b: {Name:mk3016e5fa4b796d1cce4dd4d789b10ea203a7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:15:15.642577   33891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b ...
	I0729 01:15:15.642588   33891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b: {Name:mk6b25a1b73ace691e78b028aa7c87f136b36f3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:15:15.642654   33891 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt.3f3c1d7b -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt
	I0729 01:15:15.642797   33891 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key.3f3c1d7b -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key
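
The apiserver certificate generated above carries IP SANs for the cluster service IP (10.96.0.1), loopback, each control-plane node IP, and the HA VIP 192.168.39.254, so the serving cert validates regardless of which address a client dials. A minimal, self-signed sketch of issuing a cert with IP SANs like those; minikube signs with its own CA rather than self-signing, and the subject and validity here are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs matching the ones in the log above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.69"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued cert, %d bytes DER\n", len(der))
}
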
	I0729 01:15:15.642923   33891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key
	I0729 01:15:15.642937   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:15:15.642949   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:15:15.642959   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:15:15.642970   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:15:15.642987   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:15:15.643000   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:15:15.643015   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:15:15.643024   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:15:15.643094   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:15:15.643122   33891 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:15:15.643131   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:15:15.643151   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:15:15.643171   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:15:15.643191   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:15:15.643224   33891 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:15:15.643248   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.643260   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.643271   33891 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:15:15.643823   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:15:15.674022   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:15:15.698135   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:15:15.721436   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:15:15.745227   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 01:15:15.768851   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:15:15.793098   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:15:15.817667   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/ha-845088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:15:15.842089   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:15:15.867130   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:15:15.891233   33891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:15:15.914192   33891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:15:15.930927   33891 ssh_runner.go:195] Run: openssl version
	I0729 01:15:15.937005   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:15:15.948070   33891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.952628   33891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.952670   33891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:15:15.958337   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:15:15.968779   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:15:15.979994   33891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.984574   33891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.984641   33891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:15:15.990404   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:15:15.999963   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:15:16.010817   33891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:15:16.015215   33891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:15:16.015265   33891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:15:16.021169   33891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:15:16.031224   33891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:15:16.035668   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:15:16.041397   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:15:16.047198   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:15:16.052807   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:15:16.058641   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:15:16.065264   33891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 01:15:16.071014   33891 kubeadm.go:392] StartCluster: {Name:ha-845088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-845088 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.68 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.136 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:15:16.071202   33891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:15:16.071277   33891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:15:16.117465   33891 cri.go:89] found id: "58a549d922e71ebab3f11fecd9f4cce112027221053d88dc2f10005406e0f06a"
	I0729 01:15:16.117486   33891 cri.go:89] found id: "2dbd4d5717dd2a71e5dc834f2c702d6cdac2ec32e11fdc8b865b3c82760aaf83"
	I0729 01:15:16.117491   33891 cri.go:89] found id: "a741926a458e961c77263e131b90a29509b116bec3e6d34bb71176d3991ca8b1"
	I0729 01:15:16.117494   33891 cri.go:89] found id: "dd54eae7304e5182e5293704abdceb4e9ffd712fa08fad6b3d967463872f0eec"
	I0729 01:15:16.117497   33891 cri.go:89] found id: "102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6"
	I0729 01:15:16.117500   33891 cri.go:89] found id: "4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87"
	I0729 01:15:16.117503   33891 cri.go:89] found id: "b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198"
	I0729 01:15:16.117505   33891 cri.go:89] found id: "ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8"
	I0729 01:15:16.117508   33891 cri.go:89] found id: "994e26254fd085e2926edf9c656aad1b17c748a39170b459396f42bc335f1b37"
	I0729 01:15:16.117513   33891 cri.go:89] found id: "2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46"
	I0729 01:15:16.117515   33891 cri.go:89] found id: "71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60"
	I0729 01:15:16.117518   33891 cri.go:89] found id: "2f0d5f5418f21962309391e2fc61b9ab31ab12afa2e057a4a8bbecf46d934d4c"
	I0729 01:15:16.117520   33891 cri.go:89] found id: "32f40f9b4c14412e1f58e289c0f05c0df36143bb9d0e662b8e6a5ab96bc84ff5"
	I0729 01:15:16.117523   33891 cri.go:89] found id: ""
	I0729 01:15:16.117568   33891 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.835765280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722216027835741676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1dffe8a2-0e30-442e-a6a8-43903495401e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.836410763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86d389aa-aba1-41b9-907d-1227cbd4c340 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.836471484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86d389aa-aba1-41b9-907d-1227cbd4c340 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.836887917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86d389aa-aba1-41b9-907d-1227cbd4c340 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.879291408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e31424e-6a5d-4514-a4ff-b7843878c307 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.879388778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e31424e-6a5d-4514-a4ff-b7843878c307 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.881245638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98cefcc8-a87d-450e-9292-64817890a1f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.881705103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722216027881683853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98cefcc8-a87d-450e-9292-64817890a1f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.882191210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbabf52d-9704-4592-97ad-de27671017ec name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.882488527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbabf52d-9704-4592-97ad-de27671017ec name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.884079494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbabf52d-9704-4592-97ad-de27671017ec name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.928111776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b49a7ea8-4e3d-4482-8b5f-40b66be2cd37 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.928222418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b49a7ea8-4e3d-4482-8b5f-40b66be2cd37 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.929410331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26a72fd8-1d08-4d0d-b8fe-e231984c26ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.929852175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722216027929828800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26a72fd8-1d08-4d0d-b8fe-e231984c26ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.930411449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af830930-20eb-47ec-a5df-ae65c39fc37d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.930470548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af830930-20eb-47ec-a5df-ae65c39fc37d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.931115103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af830930-20eb-47ec-a5df-ae65c39fc37d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.975121011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6b01343-e22d-4d83-bc6b-ff0993ee8ae4 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.975223500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6b01343-e22d-4d83-bc6b-ff0993ee8ae4 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.976360107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ac3ccaa-62fa-48bc-97f1-ae96cc484ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.977210363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722216027977183317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ac3ccaa-62fa-48bc-97f1-ae96cc484ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.977647743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2eb6c5a0-283b-4ed5-8d69-93cf035c5a72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.977937383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2eb6c5a0-283b-4ed5-8d69-93cf035c5a72 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:20:27 ha-845088 crio[3820]: time="2024-07-29 01:20:27.978962952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e41cca145ab253a77954971d769c9317b115b07993e26b8822e377cd5e4b470,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722215791131978924,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722215765144806799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722215760135714514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Annotations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817587b77a3ed9265060f97d06c6e55e59c753517dab115f90b210a4d8d4b251,PodSandboxId:e390f2207379f03c434dae5689092c14404b9f9dedbae02015290aca0b8562e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722215752427234696,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annotations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cea69b0d5cde22d017f386be1db032ca47eb2c7bdf0c86ee668e1f85c517c3f,PodSandboxId:325ddf55307428bba049828355bb4f3a8da7d2674b4084d2fe49431592df6ab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722215749137128641,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b770bc2-7368-4b86-89ff-399d60f17817,},Annotations:map[string]string{io.kubernetes.container.hash: d06bb5d0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7261b3d0b0caa43d986b0f4aaaa477c3df3dcc59f11701bf55932227ce247b51,PodSandboxId:bd91aa82fefd6dbf7c1924ee2a0fb99798589d5cb7ba93f33537a2e0b3a7bd84,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722215735928395354,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d75d1a8d19882beac04fd6b3dc845a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553,PodSandboxId:cf9d25c8a060c7013be99f7c540b685b3794ee05445bca8ecbf41a8a58854589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719779592129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67,PodSandboxId:ee2397a596835fd0cffabe01aa3c227f7fe3a3e52ea18d69efa156701a52a597,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722215719536943430,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451,PodSandboxId:a9afc28c0b39e871ded2b32cb858626b1742e558e8b1f6a4dba078ba1e4a6c6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722215719296404215,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca,PodSandboxId:af895d5082b723f46c1f5697e1281e534712108be9a26161e8b1e4ec797e625e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722215719057892634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c,PodSandboxId:dec544e388e32a7662552f6eea42e54b2a111ef9ad05971e894657d9c226e709,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722215719220467091,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kubernetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571,PodSandboxId:f1bfea814196944140223e82dce8f5d94f8da31f83619d64bdbe9d48b76a3d4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722215719204090539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-845088,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 8a82577ef7e027cb45d5457528698a5d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140,PodSandboxId:6d3182746ca8253a7f59c133facddefc9d27bc3907d151b65a8d3743f6ee3f29,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722215719098835462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d54
1412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0,PodSandboxId:fe384baae5f62d9a89cd5161d421dac65e0059cdbe77901e3a4ffb055f7cdc12,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722215718968496683,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2688c12ddc0a5ab7af0b9dd884185c58,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4a68477b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393f89e96685f53ad45043741e5cdeea2a14ac868361b8ec5d1c99fb7fcb80fd,PodSandboxId:077fc92624630d9345f559e83fcc88623c9c9da78c83f2fd03558dbe231bf392,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722215220872200113,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-kdxhf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d626cc7-0294-43eb-903b-83ee7ea03f3d,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc70b4e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6,PodSandboxId:860aff47921080f197906689ebdac24d8f2d07ce79c9792da378416aeb0b0556,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067520196408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-26phs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa00166-935c-4e30-899d-0ae105083984,},Annotations:map[string]string{io.kube
rnetes.container.hash: eadc8a89,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87,PodSandboxId:5998a0c18499b323d8b2f065294e71b0f1b83d8d7e0689683aa373fd912f2676,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722215067480537315,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x4jjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 659a9fc3-a597-401d-9ceb-71a04f049d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 525490bc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198,PodSandboxId:d036858417b617bd3d07094718128ed94a829b79a04481e222a4d007a8cced8a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722215055323459428,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jz7gr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d184fd2-5bfc-40bd-b7b3-98934d58a689,},Annotations:map[string]string{io.kubernetes.container.hash: df48a283,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8,PodSandboxId:a37edf1e80380d902c014ad30352a41536c6dd919531118f5bfdff6b318b36b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722215050132752424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tmzt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e92bb0-87c0-4d4e-ae34-d67970a61dc9,},Annotations:map[string]string{io.kubernetes.container.hash: d90c106c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46,PodSandboxId:00d828e6fd11cbd1fb3e98ce4070370f2935ac47836270d51eb66a8b845ac201,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722215029963625862,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06d8c918adf1d541412dd0e3ab48df0,},Annotations:map[string]string{io.kubernetes.container.hash: 56cd2528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60,PodSandboxId:64651fd976b6f146df0a71675e4e22c563cd375d3f5da24cf2a480bc054c63af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722215029937553090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-845088,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f94015277f1fa93b4433220cb8f47a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2eb6c5a0-283b-4ed5-8d69-93cf035c5a72 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4e41cca145ab2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   325ddf5530742       storage-provisioner
	6ae848e053a41       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   f1bfea8141969       kube-controller-manager-ha-845088
	c6a3220dc04b2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   fe384baae5f62       kube-apiserver-ha-845088
	817587b77a3ed       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   e390f2207379f       busybox-fc5497c4f-kdxhf
	7cea69b0d5cde       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   325ddf5530742       storage-provisioner
	7261b3d0b0caa       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   bd91aa82fefd6       kube-vip-ha-845088
	b2a4bee1eb8bc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   cf9d25c8a060c       coredns-7db6d8ff4d-x4jjj
	5540fc40e2a7f       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   ee2397a596835       kindnet-jz7gr
	8ca67b6898876       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   a9afc28c0b39e       kube-proxy-tmzt7
	578dedb8fb465       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   dec544e388e32       coredns-7db6d8ff4d-26phs
	5792cd9b8f198       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   f1bfea8141969       kube-controller-manager-ha-845088
	98efb6dd5b438       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   6d3182746ca82       etcd-ha-845088
	416edea5a4ef1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   af895d5082b72       kube-scheduler-ha-845088
	d805fa439728f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   fe384baae5f62       kube-apiserver-ha-845088
	393f89e96685f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   077fc92624630       busybox-fc5497c4f-kdxhf
	102a2205a11ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   860aff4792108       coredns-7db6d8ff4d-26phs
	4c9a1e2ce8399       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   5998a0c18499b       coredns-7db6d8ff4d-x4jjj
	b117823d9ea03       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   d036858417b61       kindnet-jz7gr
	ba58523a71dfb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   a37edf1e80380       kube-proxy-tmzt7
	2d545f40bcf5d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   00d828e6fd11c       etcd-ha-845088
	71cb29192a2ff       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   64651fd976b6f       kube-scheduler-ha-845088
	
	
	==> coredns [102a2205a11ac77c8a342be6c808b5351fa5781160d857e9ff04b4d2d6a5dbc6] <==
	[INFO] 10.244.0.4:56145 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107111s
	[INFO] 10.244.0.4:49547 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013737s
	[INFO] 10.244.2.2:50551 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157425s
	[INFO] 10.244.2.2:54720 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002000849s
	[INFO] 10.244.2.2:46977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133922s
	[INFO] 10.244.2.2:52278 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098427s
	[INFO] 10.244.2.2:33523 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166768s
	[INFO] 10.244.2.2:56762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127309s
	[INFO] 10.244.1.2:60690 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162836s
	[INFO] 10.244.0.4:53481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125124s
	[INFO] 10.244.0.4:36302 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006046s
	[INFO] 10.244.2.2:51131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200754s
	[INFO] 10.244.2.2:35216 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135186s
	[INFO] 10.244.2.2:47188 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095941s
	[INFO] 10.244.2.2:45175 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088023s
	[INFO] 10.244.1.2:53946 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000271227s
	[INFO] 10.244.0.4:35507 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089711s
	[INFO] 10.244.0.4:48138 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000191709s
	[INFO] 10.244.2.2:46681 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084718s
	[INFO] 10.244.2.2:58403 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000190529s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4c9a1e2ce8399f5810ce0c70fb535658a417344a1f17e9c1d1cb7e34563f4e87] <==
	[INFO] 10.244.1.2:54896 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275281s
	[INFO] 10.244.1.2:36709 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149351s
	[INFO] 10.244.1.2:35599 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014616s
	[INFO] 10.244.1.2:40232 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145035s
	[INFO] 10.244.0.4:42879 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002077041s
	[INFO] 10.244.0.4:46236 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001377262s
	[INFO] 10.244.2.2:60143 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018397s
	[INFO] 10.244.2.2:33059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001229041s
	[INFO] 10.244.1.2:50949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114887s
	[INFO] 10.244.1.2:41895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099234s
	[INFO] 10.244.1.2:57885 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008087s
	[INFO] 10.244.0.4:46809 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202377s
	[INFO] 10.244.0.4:54702 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067695s
	[INFO] 10.244.1.2:33676 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193639s
	[INFO] 10.244.1.2:35018 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014376s
	[INFO] 10.244.1.2:58362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164011s
	[INFO] 10.244.0.4:42745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108289s
	[INFO] 10.244.0.4:38059 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080482s
	[INFO] 10.244.2.2:57416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132756s
	[INFO] 10.244.2.2:34696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000282968s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [578dedb8fb465b5c6d85b39b06bd40cf3b76aa6df602d96f9a0bd1167fa5a59c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41280->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1018947224]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:15:31.266) (total time: 10505ms):
	Trace[1018947224]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41280->10.96.0.1:443: read: connection reset by peer 10504ms (01:15:41.771)
	Trace[1018947224]: [10.505018215s] [10.505018215s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41280->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b2a4bee1eb8bca9612f379d84734924c3cc6c1e36233455e8b21499759ad1553] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42724->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1836866381]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:15:31.698) (total time: 10072ms):
	Trace[1836866381]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42724->10.96.0.1:443: read: connection reset by peer 10072ms (01:15:41.770)
	Trace[1836866381]: [10.072483696s] [10.072483696s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42724->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-845088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_03_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:03:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:20:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:19:04 +0000   Mon, 29 Jul 2024 01:19:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:19:04 +0000   Mon, 29 Jul 2024 01:19:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:19:04 +0000   Mon, 29 Jul 2024 01:19:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:19:04 +0000   Mon, 29 Jul 2024 01:19:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    ha-845088
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbb04d72e92946e88c1da68d30c7bef3
	  System UUID:                fbb04d72-e929-46e8-8c1d-a68d30c7bef3
	  Boot ID:                    8609abf0-fb2f-4316-bc25-edde00b876e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kdxhf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-26phs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-x4jjj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-845088                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-jz7gr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-845088             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-845088    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tmzt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-845088             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-845088                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m23s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Warning  ContainerGCFailed        5m32s (x2 over 6m32s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-845088 event: Registered Node ha-845088 in Controller
	  Normal   NodeNotReady             109s                   node-controller  Node ha-845088 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     84s (x2 over 16m)      kubelet          Node ha-845088 status is now: NodeHasSufficientPID
	  Normal   NodeReady                84s (x2 over 16m)      kubelet          Node ha-845088 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    84s (x2 over 16m)      kubelet          Node ha-845088 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  84s (x2 over 16m)      kubelet          Node ha-845088 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-845088-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_05_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:05:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:19:01 +0000   Mon, 29 Jul 2024 01:19:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:19:01 +0000   Mon, 29 Jul 2024 01:19:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:19:01 +0000   Mon, 29 Jul 2024 01:19:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:19:01 +0000   Mon, 29 Jul 2024 01:19:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-845088-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71d77df4f03a4876b498a96bcef9ff64
	  System UUID:                71d77df4-f03a-4876-b498-a96bcef9ff64
	  Boot ID:                    9a5d441a-4671-4485-9dfe-2906c2e77a95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dbfgn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-845088-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-p87gx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-845088-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-845088-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-j6gxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-845088-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-845088-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-845088-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           15m                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-845088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-845088-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-845088-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m50s (x8 over 4m51s)  kubelet          Node ha-845088-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m50s (x8 over 4m51s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m50s (x7 over 4m51s)  kubelet          Node ha-845088-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-845088-m02 event: Registered Node ha-845088-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-845088-m02 status is now: NodeNotReady
	
	
	Name:               ha-845088-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-845088-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=ha-845088
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_07_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:07:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-845088-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:18:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:18:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:18:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:18:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 01:17:39 +0000   Mon, 29 Jul 2024 01:18:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-845088-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f15978c17b794a0dab280aaa8e6fe8a4
	  System UUID:                f15978c1-7b79-4a0d-ab28-0aaa8e6fe8a4
	  Boot ID:                    a8fabdf9-eba1-4579-ba9a-6e7ee437c264
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-85fmb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kindnet-rffd2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-bbp9f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-845088-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-845088-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-845088-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-845088-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   NodeNotReady             3m39s                  node-controller  Node ha-845088-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-845088-m04 event: Registered Node ha-845088-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-845088-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-845088-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-845088-m04 has been rebooted, boot id: a8fabdf9-eba1-4579-ba9a-6e7ee437c264
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-845088-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-845088-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.177713] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.054473] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057858] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.159603] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.120915] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.261683] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.164596] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +4.624660] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060939] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.270727] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.083870] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 01:04] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.392423] kauditd_printk_skb: 29 callbacks suppressed
	[Jul29 01:05] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 01:15] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.221001] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.194798] systemd-fstab-generator[3766]: Ignoring "noauto" option for root device
	[  +0.141555] systemd-fstab-generator[3778]: Ignoring "noauto" option for root device
	[  +0.290189] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.847701] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[  +3.511612] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.187104] kauditd_printk_skb: 84 callbacks suppressed
	[ +32.214030] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2d545f40bcf5d44e5844fae202896d7fd8c6e497a742f0403fb95a08f2bf5c46] <==
	{"level":"info","ts":"2024-07-29T01:13:42.135812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b received MsgPreVoteResp from 9199217ddd03919b at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b [logterm: 2, index: 2317] sent MsgPreVote request to 3ba77f52b23533d8 at term 2"}
	{"level":"info","ts":"2024-07-29T01:13:42.135878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b [logterm: 2, index: 2317] sent MsgPreVote request to 971410e140380cd2 at term 2"}
	{"level":"warn","ts":"2024-07-29T01:13:42.172584Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:13:42.172693Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.69:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T01:13:42.172792Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"9199217ddd03919b","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T01:13:42.173149Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173273Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173358Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173543Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9199217ddd03919b","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173659Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.17379Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173839Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"971410e140380cd2"}
	{"level":"info","ts":"2024-07-29T01:13:42.173851Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.173865Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.173921Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174096Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174165Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174262Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.174306Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:13:42.178672Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T01:13:42.17909Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.69:2380"}
	{"level":"info","ts":"2024-07-29T01:13:42.179123Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-845088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.69:2380"],"advertise-client-urls":["https://192.168.39.69:2379"]}
	
	
	==> etcd [98efb6dd5b438577ccc769b4d9d48b9c0c7166de239d9e7f38d2eda3fc94b140] <==
	{"level":"info","ts":"2024-07-29T01:17:03.16061Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9199217ddd03919b","to":"3ba77f52b23533d8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T01:17:03.160709Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:06.264837Z","caller":"traceutil/trace.go:171","msg":"trace[949785076] linearizableReadLoop","detail":"{readStateIndex:2881; appliedIndex:2881; }","duration":"118.865621ms","start":"2024-07-29T01:17:06.14593Z","end":"2024-07-29T01:17:06.264795Z","steps":["trace[949785076] 'read index received'  (duration: 118.860181ms)","trace[949785076] 'applied index is now lower than readState.Index'  (duration: 4.221µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:17:06.265361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.334957ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-845088-m03\" ","response":"range_response_count:1 size:5801"}
	{"level":"info","ts":"2024-07-29T01:17:06.26546Z","caller":"traceutil/trace.go:171","msg":"trace[350761670] transaction","detail":"{read_only:false; response_revision:2482; number_of_response:1; }","duration":"133.606729ms","start":"2024-07-29T01:17:06.131826Z","end":"2024-07-29T01:17:06.265432Z","steps":["trace[350761670] 'process raft request'  (duration: 133.109155ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:17:06.265595Z","caller":"traceutil/trace.go:171","msg":"trace[1628044369] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-845088-m03; range_end:; response_count:1; response_revision:2481; }","duration":"119.563487ms","start":"2024-07-29T01:17:06.145924Z","end":"2024-07-29T01:17:06.265488Z","steps":["trace[1628044369] 'agreement among raft nodes before linearized reading'  (duration: 119.05081ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:17:53.292103Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.243:33218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T01:17:53.304529Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.243:33222","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-29T01:17:53.316725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9199217ddd03919b switched to configuration voters=(10491453631398908315 10886344758892432594)"}
	{"level":"info","ts":"2024-07-29T01:17:53.319433Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6c21f62219c1156b","local-member-id":"9199217ddd03919b","removed-remote-peer-id":"3ba77f52b23533d8","removed-remote-peer-urls":["https://192.168.39.243:2380"]}
	{"level":"info","ts":"2024-07-29T01:17:53.319622Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"warn","ts":"2024-07-29T01:17:53.320511Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:53.320649Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"warn","ts":"2024-07-29T01:17:53.321287Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:53.321362Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:53.321435Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"warn","ts":"2024-07-29T01:17:53.321746Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T01:17:53.321826Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3ba77f52b23533d8","error":"failed to read 3ba77f52b23533d8 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T01:17:53.321881Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"warn","ts":"2024-07-29T01:17:53.322152Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T01:17:53.322215Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9199217ddd03919b","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:53.322255Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3ba77f52b23533d8"}
	{"level":"info","ts":"2024-07-29T01:17:53.322292Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"9199217ddd03919b","removed-remote-peer-id":"3ba77f52b23533d8"}
	{"level":"warn","ts":"2024-07-29T01:17:53.335515Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"9199217ddd03919b","remote-peer-id-stream-handler":"9199217ddd03919b","remote-peer-id-from":"3ba77f52b23533d8"}
	{"level":"warn","ts":"2024-07-29T01:17:53.340759Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.243:43164","server-name":"","error":"read tcp 192.168.39.69:2380->192.168.39.243:43164: read: connection reset by peer"}
	
	
	==> kernel <==
	 01:20:28 up 17 min,  0 users,  load average: 0.25, 0.33, 0.27
	Linux ha-845088 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5540fc40e2a7f15b320e56377d094ce83a31f00a2df550fc0c5a34c0a6b53f67] <==
	I0729 01:19:40.637130       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:19:50.632864       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:19:50.632962       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:19:50.633214       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:19:50.633266       1 main.go:299] handling current node
	I0729 01:19:50.633292       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:19:50.633310       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:20:00.640360       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:20:00.640392       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:20:00.640543       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:20:00.640568       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:20:00.640634       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:20:00.640656       1 main.go:299] handling current node
	I0729 01:20:10.640391       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:20:10.640496       1 main.go:299] handling current node
	I0729 01:20:10.640524       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:20:10.640542       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:20:10.640715       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:20:10.640742       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:20:20.632472       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:20:20.632575       1 main.go:299] handling current node
	I0729 01:20:20.632604       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:20:20.632628       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:20:20.632764       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:20:20.632794       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b117823d9ea03de188eac3320a7ea70749a5271ab35a1a1453273051803d5198] <==
	I0729 01:13:06.416513       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:13:16.406930       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:13:16.407058       1 main.go:299] handling current node
	I0729 01:13:16.407088       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:13:16.407094       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:13:16.407363       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:13:16.407372       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:13:16.407437       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:13:16.407442       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:13:26.408274       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:13:26.408380       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:13:26.408793       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:13:26.408837       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	I0729 01:13:26.409091       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:13:26.409102       1 main.go:299] handling current node
	I0729 01:13:26.409115       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:13:26.409119       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:13:36.414695       1 main.go:295] Handling node with IPs: map[192.168.39.69:{}]
	I0729 01:13:36.414759       1 main.go:299] handling current node
	I0729 01:13:36.414788       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0729 01:13:36.414793       1 main.go:322] Node ha-845088-m02 has CIDR [10.244.1.0/24] 
	I0729 01:13:36.414954       1 main.go:295] Handling node with IPs: map[192.168.39.243:{}]
	I0729 01:13:36.414994       1 main.go:322] Node ha-845088-m03 has CIDR [10.244.2.0/24] 
	I0729 01:13:36.415136       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0729 01:13:36.415162       1 main.go:322] Node ha-845088-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c6a3220dc04b24fbdf00cd236d9a94f8a9523d2f00a3de205e6a608590ddc250] <==
	I0729 01:16:01.966212       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 01:16:01.966329       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 01:16:02.023328       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:16:02.023364       1 policy_source.go:224] refreshing policies
	I0729 01:16:02.040529       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:16:02.045347       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:16:02.045678       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 01:16:02.047522       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 01:16:02.047554       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 01:16:02.047632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:16:02.055579       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:16:02.057780       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 01:16:02.057913       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 01:16:02.057952       1 aggregator.go:165] initial CRD sync complete...
	I0729 01:16:02.057981       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 01:16:02.057988       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 01:16:02.057995       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:16:02.064082       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0729 01:16:02.067833       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243 192.168.39.68]
	I0729 01:16:02.070389       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:16:02.080124       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 01:16:02.086998       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 01:16:02.965177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 01:16:03.419551       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243 192.168.39.68 192.168.39.69]
	W0729 01:16:13.418763       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.68 192.168.39.69]
	
	
	==> kube-apiserver [d805fa439728f540801adc68dca53128909d2149cfbc2c7c5e877d34560ae3e0] <==
	I0729 01:15:19.725379       1 options.go:221] external host was not specified, using 192.168.39.69
	I0729 01:15:19.726409       1 server.go:148] Version: v1.30.3
	I0729 01:15:19.726460       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:15:20.604975       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 01:15:20.654084       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:15:20.655191       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 01:15:20.655257       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 01:15:20.655631       1 instance.go:299] Using reconciler: lease
	W0729 01:15:40.605672       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 01:15:40.605917       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 01:15:40.661089       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5792cd9b8f1980e8adbf4a5b3167ab46c050a8ff4c196487e0288fcb3a808571] <==
	I0729 01:15:20.866207       1 serving.go:380] Generated self-signed cert in-memory
	I0729 01:15:21.295598       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 01:15:21.295637       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:15:21.297290       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 01:15:21.297932       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 01:15:21.298104       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:15:21.298184       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 01:15:41.669127       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.69:8443/healthz\": dial tcp 192.168.39.69:8443: connect: connection refused"
	
	
	==> kube-controller-manager [6ae848e053a413d6390d563c86e66749f80257ee3338a05474d65c7fe52e17a2] <==
	I0729 01:18:39.460633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.723097ms"
	I0729 01:18:39.483108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.354081ms"
	I0729 01:18:39.483479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.752µs"
	I0729 01:18:39.503519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.644669ms"
	I0729 01:18:39.504683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.75µs"
	I0729 01:18:39.546577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.074629ms"
	I0729 01:18:39.547721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.551µs"
	I0729 01:18:39.756299       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xmlfm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xmlfm\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 01:18:39.757716       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"05ed8c70-6ebe-4528-af13-063d52719c0e", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xmlfm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xmlfm": the object has been modified; please apply your changes to the latest version and try again
	I0729 01:18:39.792790       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.175446ms"
	I0729 01:18:39.792919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.812µs"
	I0729 01:18:41.841261       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0729 01:18:41.855500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.423592ms"
	I0729 01:18:41.855925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.67µs"
	I0729 01:18:56.248196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.906098ms"
	I0729 01:18:56.254108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.413059ms"
	I0729 01:18:56.398948       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-xmlfm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-xmlfm\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 01:18:56.403535       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"05ed8c70-6ebe-4528-af13-063d52719c0e", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-xmlfm EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-xmlfm": the object has been modified; please apply your changes to the latest version and try again
	I0729 01:18:56.456099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.197093ms"
	I0729 01:18:56.456875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="148.094µs"
	I0729 01:18:56.491133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.174916ms"
	I0729 01:18:56.491228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.085µs"
	I0729 01:18:58.036200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.912987ms"
	I0729 01:18:58.036326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41µs"
	I0729 01:19:01.860693       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8ca67b68988769cf0f2629127e88cb8e28d64711c5477e47cbd0260940c95451] <==
	I0729 01:15:20.675213       1 server_linux.go:69] "Using iptables proxy"
	E0729 01:15:23.530690       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:26.603456       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:29.675426       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:35.819993       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 01:15:48.107276       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 01:16:04.529528       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.69"]
	I0729 01:16:04.620328       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:16:04.620401       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:16:04.620425       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:16:04.630104       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:16:04.630718       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:16:04.630844       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:16:04.634484       1 config.go:192] "Starting service config controller"
	I0729 01:16:04.634653       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:16:04.634712       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:16:04.634741       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:16:04.635288       1 config.go:319] "Starting node config controller"
	I0729 01:16:04.635342       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:16:04.736199       1 shared_informer.go:320] Caches are synced for node config
	I0729 01:16:04.736258       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:16:04.736291       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ba58523a71dfbc6efc2df74bc80c80d691014793d9b88e6593d469801095d2a8] <==
	E0729 01:12:37.645168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:40.715755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:40.715857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:40.716110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:40.716215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:40.716311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:40.716368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:46.860211       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:46.860274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:46.860361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:46.860395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:49.932271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:49.932760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:56.075906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:56.076313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:59.147815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:59.148052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:12:59.148306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:12:59.148402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:13:14.508328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:13:14.508447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-845088&resourceVersion=1971": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:13:17.579066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:13:17.579317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1901": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 01:13:26.794600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 01:13:26.794777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [416edea5a4ef1589f0f30884e6c1c4c26063ba7ccc13ee4f90d22801464de2ca] <==
	W0729 01:15:58.211285       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.69:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.211356       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.69:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.272879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.69:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.272955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.69:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.509745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.69:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.509808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.69:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:58.805710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.69:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:58.805828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.69:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:59.070229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.69:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:59.070285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.69:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:15:59.520990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.69:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	E0729 01:15:59.521171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.69:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.69:8443: connect: connection refused
	W0729 01:16:01.971363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:16:01.971412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:16:01.971495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:16:01.971525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:16:01.971572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:16:01.971608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:16:01.971825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:16:01.971956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0729 01:16:13.475175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 01:17:49.994276       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-85fmb\": pod busybox-fc5497c4f-85fmb is already assigned to node \"ha-845088-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-85fmb" node="ha-845088-m04"
	E0729 01:17:49.995645       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 76a05605-feb6-4826-af9a-f4bdc637b084(default/busybox-fc5497c4f-85fmb) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-85fmb"
	E0729 01:17:49.995866       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-85fmb\": pod busybox-fc5497c4f-85fmb is already assigned to node \"ha-845088-m04\"" pod="default/busybox-fc5497c4f-85fmb"
	I0729 01:17:49.996071       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-85fmb" node="ha-845088-m04"
	
	
	==> kube-scheduler [71cb29192a2ffc140cfde54b5d38a513e16b25b36b29d762ae02aaac663e9d60] <==
	W0729 01:13:33.533113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:13:33.533215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:13:33.721255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:13:33.721302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:13:33.841816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 01:13:33.841882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 01:13:34.184983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:13:34.185075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:13:35.022484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 01:13:35.022572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:13:35.339307       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:13:35.339364       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:13:35.573977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 01:13:35.574080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 01:13:35.854281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 01:13:35.854369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 01:13:41.004073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:13:41.004179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:13:41.409342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 01:13:41.409373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 01:13:41.603564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 01:13:41.603603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 01:13:41.979457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:13:41.979504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 01:13:42.070471       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 01:18:45 ha-845088 kubelet[1372]: E0729 01:18:45.805951    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-845088\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 29 01:18:49 ha-845088 kubelet[1372]: E0729 01:18:49.375160    1372 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-845088?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.807896    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.808097    1372 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.808189    1372 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.808218    1372 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.808248    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.808282    1372 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.808308    1372 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.807896    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: W0729 01:18:53.807959    1372 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 29 01:18:53 ha-845088 kubelet[1372]: E0729 01:18:53.808416    1372 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-845088?timeout=10s\": http2: client connection lost"
	Jul 29 01:18:53 ha-845088 kubelet[1372]: I0729 01:18:53.808460    1372 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jul 29 01:18:53 ha-845088 kubelet[1372]: E0729 01:18:53.808835    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-845088\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-845088?timeout=10s\": http2: client connection lost"
	Jul 29 01:18:53 ha-845088 kubelet[1372]: E0729 01:18:53.808907    1372 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 29 01:18:56 ha-845088 kubelet[1372]: E0729 01:18:56.147614    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:18:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:18:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:18:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:18:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:19:56 ha-845088 kubelet[1372]: E0729 01:19:56.142989    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:19:56 ha-845088 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:19:56 ha-845088 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:19:56 ha-845088 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:19:56 ha-845088 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 01:20:27.542800   36678 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-9421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-845088 -n ha-845088
helpers_test.go:261: (dbg) Run:  kubectl --context ha-845088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (334.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-060411
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-060411
E0729 01:36:27.216066   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-060411: exit status 82 (2m1.916809449s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-060411-m03"  ...
	* Stopping node "multinode-060411-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-060411" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060411 --wait=true -v=8 --alsologtostderr
E0729 01:37:23.073464   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:40:26.116481   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060411 --wait=true -v=8 --alsologtostderr: (3m30.117152743s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-060411
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-060411 -n multinode-060411
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-060411 logs -n 25: (1.545590385s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile705326141/001/cp-test_multinode-060411-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411:/home/docker/cp-test_multinode-060411-m02_multinode-060411.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411 sudo cat                                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m02_multinode-060411.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03:/home/docker/cp-test_multinode-060411-m02_multinode-060411-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411-m03 sudo cat                                   | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m02_multinode-060411-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp testdata/cp-test.txt                                                | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile705326141/001/cp-test_multinode-060411-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411:/home/docker/cp-test_multinode-060411-m03_multinode-060411.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411 sudo cat                                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m03_multinode-060411.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02:/home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411-m02 sudo cat                                   | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-060411 node stop m03                                                          | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	| node    | multinode-060411 node start                                                             | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:35 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:35 UTC |                     |
	| stop    | -p multinode-060411                                                                     | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:35 UTC |                     |
	| start   | -p multinode-060411                                                                     | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:37 UTC | 29 Jul 24 01:40 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:37:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:37:09.455841   45889 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:37:09.455977   45889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:37:09.455990   45889 out.go:304] Setting ErrFile to fd 2...
	I0729 01:37:09.455996   45889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:37:09.456188   45889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:37:09.456750   45889 out.go:298] Setting JSON to false
	I0729 01:37:09.457704   45889 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4775,"bootTime":1722212254,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:37:09.457760   45889 start.go:139] virtualization: kvm guest
	I0729 01:37:09.460229   45889 out.go:177] * [multinode-060411] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:37:09.461579   45889 notify.go:220] Checking for updates...
	I0729 01:37:09.461629   45889 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:37:09.463021   45889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:37:09.464587   45889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:37:09.466004   45889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:37:09.467358   45889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:37:09.468611   45889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:37:09.470510   45889 config.go:182] Loaded profile config "multinode-060411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:37:09.470606   45889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:37:09.471026   45889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:37:09.471121   45889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:37:09.487190   45889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0729 01:37:09.487588   45889 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:37:09.488118   45889 main.go:141] libmachine: Using API Version  1
	I0729 01:37:09.488148   45889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:37:09.488487   45889 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:37:09.488675   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:37:09.524968   45889 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:37:09.526329   45889 start.go:297] selected driver: kvm2
	I0729 01:37:09.526341   45889 start.go:901] validating driver "kvm2" against &{Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:37:09.526480   45889 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:37:09.526815   45889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:37:09.526881   45889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:37:09.542086   45889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:37:09.542748   45889 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:37:09.542823   45889 cni.go:84] Creating CNI manager for ""
	I0729 01:37:09.542837   45889 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 01:37:09.542913   45889 start.go:340] cluster config:
	{Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:37:09.543095   45889 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:37:09.544981   45889 out.go:177] * Starting "multinode-060411" primary control-plane node in "multinode-060411" cluster
	I0729 01:37:09.546322   45889 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:37:09.546358   45889 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:37:09.546365   45889 cache.go:56] Caching tarball of preloaded images
	I0729 01:37:09.546438   45889 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:37:09.546447   45889 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:37:09.546559   45889 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/config.json ...
	I0729 01:37:09.546744   45889 start.go:360] acquireMachinesLock for multinode-060411: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:37:09.546785   45889 start.go:364] duration metric: took 23.53µs to acquireMachinesLock for "multinode-060411"
	I0729 01:37:09.546796   45889 start.go:96] Skipping create...Using existing machine configuration
	I0729 01:37:09.546801   45889 fix.go:54] fixHost starting: 
	I0729 01:37:09.547043   45889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:37:09.547099   45889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:37:09.561568   45889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0729 01:37:09.562011   45889 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:37:09.562527   45889 main.go:141] libmachine: Using API Version  1
	I0729 01:37:09.562550   45889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:37:09.562891   45889 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:37:09.563118   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:37:09.563322   45889 main.go:141] libmachine: (multinode-060411) Calling .GetState
	I0729 01:37:09.564986   45889 fix.go:112] recreateIfNeeded on multinode-060411: state=Running err=<nil>
	W0729 01:37:09.565011   45889 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 01:37:09.567008   45889 out.go:177] * Updating the running kvm2 "multinode-060411" VM ...
	I0729 01:37:09.568492   45889 machine.go:94] provisionDockerMachine start ...
	I0729 01:37:09.568518   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:37:09.568758   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.571515   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.571915   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.571936   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.572080   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:09.572246   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.572387   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.572552   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:09.572705   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:09.572946   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:09.572962   45889 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:37:09.680701   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-060411
	
	I0729 01:37:09.680735   45889 main.go:141] libmachine: (multinode-060411) Calling .GetMachineName
	I0729 01:37:09.680996   45889 buildroot.go:166] provisioning hostname "multinode-060411"
	I0729 01:37:09.681028   45889 main.go:141] libmachine: (multinode-060411) Calling .GetMachineName
	I0729 01:37:09.681211   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.683887   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.684218   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.684244   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.684399   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:09.684590   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.684737   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.684901   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:09.685057   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:09.685250   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:09.685267   45889 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-060411 && echo "multinode-060411" | sudo tee /etc/hostname
	I0729 01:37:09.810051   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-060411
	
	I0729 01:37:09.810081   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.813215   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.813652   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.813683   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.813828   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:09.814012   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.814180   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.814305   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:09.814455   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:09.814615   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:09.814632   45889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-060411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-060411/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-060411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:37:09.915940   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:37:09.915972   45889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:37:09.916057   45889 buildroot.go:174] setting up certificates
	I0729 01:37:09.916067   45889 provision.go:84] configureAuth start
	I0729 01:37:09.916078   45889 main.go:141] libmachine: (multinode-060411) Calling .GetMachineName
	I0729 01:37:09.916330   45889 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:37:09.919112   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.919481   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.919511   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.919629   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.921890   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.922322   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.922362   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.922510   45889 provision.go:143] copyHostCerts
	I0729 01:37:09.922540   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:37:09.922581   45889 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:37:09.922596   45889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:37:09.922680   45889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:37:09.922776   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:37:09.922800   45889 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:37:09.922807   45889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:37:09.922835   45889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:37:09.922920   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:37:09.922939   45889 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:37:09.922945   45889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:37:09.922967   45889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:37:09.923012   45889 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.multinode-060411 san=[127.0.0.1 192.168.39.140 localhost minikube multinode-060411]
	I0729 01:37:10.227610   45889 provision.go:177] copyRemoteCerts
	I0729 01:37:10.227665   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:37:10.227688   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:10.230757   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.231165   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:10.231192   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.231374   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:10.231578   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:10.231716   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:10.231813   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:37:10.314357   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:37:10.314445   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:37:10.339500   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:37:10.339574   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 01:37:10.366644   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:37:10.366769   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:37:10.393783   45889 provision.go:87] duration metric: took 477.703082ms to configureAuth
	I0729 01:37:10.393813   45889 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:37:10.394040   45889 config.go:182] Loaded profile config "multinode-060411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:37:10.394112   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:10.397088   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.397539   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:10.397572   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.397752   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:10.397919   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:10.398068   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:10.398182   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:10.398336   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:10.398498   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:10.398514   45889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:38:41.113529   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:38:41.113557   45889 machine.go:97] duration metric: took 1m31.545048794s to provisionDockerMachine
	I0729 01:38:41.113568   45889 start.go:293] postStartSetup for "multinode-060411" (driver="kvm2")
	I0729 01:38:41.113578   45889 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:38:41.113593   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.113896   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:38:41.113930   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.117048   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.117575   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.117604   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.117756   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.117978   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.118168   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.118299   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:38:41.198468   45889 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:38:41.203071   45889 command_runner.go:130] > NAME=Buildroot
	I0729 01:38:41.203102   45889 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 01:38:41.203110   45889 command_runner.go:130] > ID=buildroot
	I0729 01:38:41.203117   45889 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 01:38:41.203125   45889 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 01:38:41.203163   45889 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:38:41.203179   45889 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:38:41.203238   45889 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:38:41.203315   45889 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:38:41.203327   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:38:41.203410   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:38:41.213174   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:38:41.238596   45889 start.go:296] duration metric: took 125.014807ms for postStartSetup
	I0729 01:38:41.238637   45889 fix.go:56] duration metric: took 1m31.691836489s for fixHost
	I0729 01:38:41.238656   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.241479   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.241940   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.241986   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.242178   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.242385   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.242620   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.242750   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.242904   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:38:41.243073   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:38:41.243085   45889 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:38:41.343764   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217121.314532067
	
	I0729 01:38:41.343789   45889 fix.go:216] guest clock: 1722217121.314532067
	I0729 01:38:41.343797   45889 fix.go:229] Guest: 2024-07-29 01:38:41.314532067 +0000 UTC Remote: 2024-07-29 01:38:41.238641824 +0000 UTC m=+91.817243193 (delta=75.890243ms)
	I0729 01:38:41.343815   45889 fix.go:200] guest clock delta is within tolerance: 75.890243ms
	I0729 01:38:41.343820   45889 start.go:83] releasing machines lock for "multinode-060411", held for 1m31.797028617s
	I0729 01:38:41.343837   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.344084   45889 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:38:41.346830   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.347273   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.347296   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.347519   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.348069   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.348241   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.348325   45889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:38:41.348375   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.348473   45889 ssh_runner.go:195] Run: cat /version.json
	I0729 01:38:41.348492   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.351261   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.351596   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.351628   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.351656   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.351806   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.351983   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.352054   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.352077   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.352116   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.352273   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.352271   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:38:41.352404   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.352568   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.352708   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:38:41.445766   45889 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 01:38:41.445805   45889 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 01:38:41.445955   45889 ssh_runner.go:195] Run: systemctl --version
	I0729 01:38:41.452175   45889 command_runner.go:130] > systemd 252 (252)
	I0729 01:38:41.452217   45889 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 01:38:41.452285   45889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:38:41.613304   45889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 01:38:41.620128   45889 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 01:38:41.620444   45889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:38:41.620510   45889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:38:41.629856   45889 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:38:41.629879   45889 start.go:495] detecting cgroup driver to use...
	I0729 01:38:41.629934   45889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:38:41.645649   45889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:38:41.660060   45889 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:38:41.660114   45889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:38:41.673820   45889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:38:41.687348   45889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:38:41.833287   45889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:38:41.979000   45889 docker.go:233] disabling docker service ...
	I0729 01:38:41.979085   45889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:38:41.996524   45889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:38:42.010071   45889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:38:42.153527   45889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:38:42.294213   45889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:38:42.308590   45889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:38:42.327898   45889 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 01:38:42.327943   45889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:38:42.327999   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.339018   45889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:38:42.339116   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.350345   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.361804   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.373148   45889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:38:42.384588   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.397188   45889 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.408389   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.420377   45889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:38:42.431699   45889 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 01:38:42.431760   45889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:38:42.443000   45889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:38:42.616768   45889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:38:50.152688   45889 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.535884066s)
	I0729 01:38:50.152720   45889 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:38:50.152777   45889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:38:50.157741   45889 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 01:38:50.157771   45889 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 01:38:50.157785   45889 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0729 01:38:50.157795   45889 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 01:38:50.157803   45889 command_runner.go:130] > Access: 2024-07-29 01:38:50.013635142 +0000
	I0729 01:38:50.157811   45889 command_runner.go:130] > Modify: 2024-07-29 01:38:50.013635142 +0000
	I0729 01:38:50.157819   45889 command_runner.go:130] > Change: 2024-07-29 01:38:50.013635142 +0000
	I0729 01:38:50.157824   45889 command_runner.go:130] >  Birth: -
	I0729 01:38:50.157885   45889 start.go:563] Will wait 60s for crictl version
	I0729 01:38:50.157942   45889 ssh_runner.go:195] Run: which crictl
	I0729 01:38:50.162268   45889 command_runner.go:130] > /usr/bin/crictl
	I0729 01:38:50.162342   45889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:38:50.200730   45889 command_runner.go:130] > Version:  0.1.0
	I0729 01:38:50.200752   45889 command_runner.go:130] > RuntimeName:  cri-o
	I0729 01:38:50.200759   45889 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 01:38:50.200764   45889 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 01:38:50.202828   45889 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:38:50.202892   45889 ssh_runner.go:195] Run: crio --version
	I0729 01:38:50.230166   45889 command_runner.go:130] > crio version 1.29.1
	I0729 01:38:50.230192   45889 command_runner.go:130] > Version:        1.29.1
	I0729 01:38:50.230237   45889 command_runner.go:130] > GitCommit:      unknown
	I0729 01:38:50.230246   45889 command_runner.go:130] > GitCommitDate:  unknown
	I0729 01:38:50.230253   45889 command_runner.go:130] > GitTreeState:   clean
	I0729 01:38:50.230265   45889 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 01:38:50.230279   45889 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 01:38:50.230288   45889 command_runner.go:130] > Compiler:       gc
	I0729 01:38:50.230298   45889 command_runner.go:130] > Platform:       linux/amd64
	I0729 01:38:50.230304   45889 command_runner.go:130] > Linkmode:       dynamic
	I0729 01:38:50.230315   45889 command_runner.go:130] > BuildTags:      
	I0729 01:38:50.230322   45889 command_runner.go:130] >   containers_image_ostree_stub
	I0729 01:38:50.230329   45889 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 01:38:50.230336   45889 command_runner.go:130] >   btrfs_noversion
	I0729 01:38:50.230344   45889 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 01:38:50.230351   45889 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 01:38:50.230359   45889 command_runner.go:130] >   seccomp
	I0729 01:38:50.230365   45889 command_runner.go:130] > LDFlags:          unknown
	I0729 01:38:50.230372   45889 command_runner.go:130] > SeccompEnabled:   true
	I0729 01:38:50.230379   45889 command_runner.go:130] > AppArmorEnabled:  false
	I0729 01:38:50.231530   45889 ssh_runner.go:195] Run: crio --version
	I0729 01:38:50.262075   45889 command_runner.go:130] > crio version 1.29.1
	I0729 01:38:50.262102   45889 command_runner.go:130] > Version:        1.29.1
	I0729 01:38:50.262109   45889 command_runner.go:130] > GitCommit:      unknown
	I0729 01:38:50.262113   45889 command_runner.go:130] > GitCommitDate:  unknown
	I0729 01:38:50.262118   45889 command_runner.go:130] > GitTreeState:   clean
	I0729 01:38:50.262124   45889 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 01:38:50.262129   45889 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 01:38:50.262133   45889 command_runner.go:130] > Compiler:       gc
	I0729 01:38:50.262137   45889 command_runner.go:130] > Platform:       linux/amd64
	I0729 01:38:50.262141   45889 command_runner.go:130] > Linkmode:       dynamic
	I0729 01:38:50.262148   45889 command_runner.go:130] > BuildTags:      
	I0729 01:38:50.262152   45889 command_runner.go:130] >   containers_image_ostree_stub
	I0729 01:38:50.262156   45889 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 01:38:50.262159   45889 command_runner.go:130] >   btrfs_noversion
	I0729 01:38:50.262164   45889 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 01:38:50.262168   45889 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 01:38:50.262172   45889 command_runner.go:130] >   seccomp
	I0729 01:38:50.262177   45889 command_runner.go:130] > LDFlags:          unknown
	I0729 01:38:50.262184   45889 command_runner.go:130] > SeccompEnabled:   true
	I0729 01:38:50.262188   45889 command_runner.go:130] > AppArmorEnabled:  false
	I0729 01:38:50.264001   45889 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:38:50.265381   45889 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:38:50.268240   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:50.268640   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:50.268663   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:50.268816   45889 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:38:50.273037   45889 command_runner.go:130] > 192.168.39.1	host.minikube.internal
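
The grep above confirms that the host.minikube.internal alias already resolves inside the VM. A minimal Go sketch of such a check-and-append step (illustrative only, not minikube's implementation; writing /etc/hosts needs root on a real node):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostAlias appends "ip name" to the hosts file if name is not present.
func ensureHostAlias(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	if strings.Contains(string(data), name) {
		return nil // already present, nothing to do
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, name)
	return err
}

func main() {
	fmt.Println(ensureHostAlias("/etc/hosts", "192.168.39.1", "host.minikube.internal"))
}
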
	I0729 01:38:50.273127   45889 kubeadm.go:883] updating cluster {Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
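
The single long kubeadm.go:883 line above is one flattened cluster-config struct. For readability, a trimmed-down Go sketch (field names abbreviated; not minikube's full config type) showing just the three node entries it carries:

package main

import "fmt"

// node captures the per-node fields visible in the dump above.
type node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

func main() {
	// Values copied from the Nodes:[...] slice in the log line above.
	nodes := []node{
		{Name: "", IP: "192.168.39.140", Port: 8443, KubernetesVersion: "v1.30.3", ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.233", Port: 8443, KubernetesVersion: "v1.30.3", ControlPlane: false, Worker: true},
		{Name: "m03", IP: "192.168.39.190", Port: 0, KubernetesVersion: "v1.30.3", ControlPlane: false, Worker: true},
	}
	for _, n := range nodes {
		fmt.Printf("%-4s %-15s port=%d control-plane=%v\n", n.Name, n.IP, n.Port, n.ControlPlane)
	}
}
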
	I0729 01:38:50.273300   45889 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:38:50.273356   45889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:38:50.317638   45889 command_runner.go:130] > {
	I0729 01:38:50.317656   45889 command_runner.go:130] >   "images": [
	I0729 01:38:50.317660   45889 command_runner.go:130] >     {
	I0729 01:38:50.317668   45889 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 01:38:50.317673   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317678   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 01:38:50.317682   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317686   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.317694   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 01:38:50.317700   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 01:38:50.317704   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317710   45889 command_runner.go:130] >       "size": "87165492",
	I0729 01:38:50.317716   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.317721   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.317729   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.317738   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.317742   45889 command_runner.go:130] >     },
	I0729 01:38:50.317748   45889 command_runner.go:130] >     {
	I0729 01:38:50.317760   45889 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 01:38:50.317764   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317769   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 01:38:50.317775   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317779   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.317788   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 01:38:50.317796   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 01:38:50.317802   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317806   45889 command_runner.go:130] >       "size": "87174707",
	I0729 01:38:50.317812   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.317822   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.317833   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.317843   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.317851   45889 command_runner.go:130] >     },
	I0729 01:38:50.317858   45889 command_runner.go:130] >     {
	I0729 01:38:50.317865   45889 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 01:38:50.317871   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317876   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 01:38:50.317882   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317886   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.317895   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 01:38:50.317906   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 01:38:50.317915   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317929   45889 command_runner.go:130] >       "size": "1363676",
	I0729 01:38:50.317939   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.317948   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.317958   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.317965   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.317969   45889 command_runner.go:130] >     },
	I0729 01:38:50.317975   45889 command_runner.go:130] >     {
	I0729 01:38:50.317981   45889 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 01:38:50.317987   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317992   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 01:38:50.317998   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318002   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318016   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 01:38:50.318035   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 01:38:50.318044   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318053   45889 command_runner.go:130] >       "size": "31470524",
	I0729 01:38:50.318062   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.318072   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318082   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318089   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318093   45889 command_runner.go:130] >     },
	I0729 01:38:50.318107   45889 command_runner.go:130] >     {
	I0729 01:38:50.318119   45889 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 01:38:50.318129   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318138   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 01:38:50.318147   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318157   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318172   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 01:38:50.318186   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 01:38:50.318194   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318200   45889 command_runner.go:130] >       "size": "61245718",
	I0729 01:38:50.318206   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.318212   45889 command_runner.go:130] >       "username": "nonroot",
	I0729 01:38:50.318223   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318232   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318238   45889 command_runner.go:130] >     },
	I0729 01:38:50.318243   45889 command_runner.go:130] >     {
	I0729 01:38:50.318254   45889 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 01:38:50.318264   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318274   45889 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 01:38:50.318281   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318287   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318297   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 01:38:50.318312   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 01:38:50.318320   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318327   45889 command_runner.go:130] >       "size": "150779692",
	I0729 01:38:50.318336   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318346   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318354   45889 command_runner.go:130] >       },
	I0729 01:38:50.318361   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318370   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318377   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318385   45889 command_runner.go:130] >     },
	I0729 01:38:50.318390   45889 command_runner.go:130] >     {
	I0729 01:38:50.318416   45889 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 01:38:50.318430   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318438   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 01:38:50.318443   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318454   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318469   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 01:38:50.318484   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 01:38:50.318494   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318503   45889 command_runner.go:130] >       "size": "117609954",
	I0729 01:38:50.318510   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318515   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318523   45889 command_runner.go:130] >       },
	I0729 01:38:50.318532   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318541   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318551   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318560   45889 command_runner.go:130] >     },
	I0729 01:38:50.318565   45889 command_runner.go:130] >     {
	I0729 01:38:50.318578   45889 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 01:38:50.318587   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318594   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 01:38:50.318599   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318604   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318649   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 01:38:50.318666   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 01:38:50.318672   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318679   45889 command_runner.go:130] >       "size": "112198984",
	I0729 01:38:50.318689   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318696   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318704   45889 command_runner.go:130] >       },
	I0729 01:38:50.318711   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318717   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318724   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318729   45889 command_runner.go:130] >     },
	I0729 01:38:50.318733   45889 command_runner.go:130] >     {
	I0729 01:38:50.318742   45889 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 01:38:50.318748   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318756   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 01:38:50.318761   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318768   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318783   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 01:38:50.318794   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 01:38:50.318799   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318807   45889 command_runner.go:130] >       "size": "85953945",
	I0729 01:38:50.318813   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.318819   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318825   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318830   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318833   45889 command_runner.go:130] >     },
	I0729 01:38:50.318836   45889 command_runner.go:130] >     {
	I0729 01:38:50.318846   45889 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 01:38:50.318855   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318863   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 01:38:50.318871   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318878   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318890   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 01:38:50.318905   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 01:38:50.318914   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318921   45889 command_runner.go:130] >       "size": "63051080",
	I0729 01:38:50.318932   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318938   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318944   45889 command_runner.go:130] >       },
	I0729 01:38:50.318950   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318956   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318965   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318971   45889 command_runner.go:130] >     },
	I0729 01:38:50.318980   45889 command_runner.go:130] >     {
	I0729 01:38:50.318990   45889 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 01:38:50.318999   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.319008   45889 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 01:38:50.319013   45889 command_runner.go:130] >       ],
	I0729 01:38:50.319021   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.319034   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 01:38:50.319045   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 01:38:50.319053   45889 command_runner.go:130] >       ],
	I0729 01:38:50.319071   45889 command_runner.go:130] >       "size": "750414",
	I0729 01:38:50.319080   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.319087   45889 command_runner.go:130] >         "value": "65535"
	I0729 01:38:50.319095   45889 command_runner.go:130] >       },
	I0729 01:38:50.319108   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.319116   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.319123   45889 command_runner.go:130] >       "pinned": true
	I0729 01:38:50.319130   45889 command_runner.go:130] >     }
	I0729 01:38:50.319133   45889 command_runner.go:130] >   ]
	I0729 01:38:50.319138   45889 command_runner.go:130] > }
	I0729 01:38:50.319330   45889 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:38:50.319343   45889 crio.go:433] Images already preloaded, skipping extraction
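
The crio.go:514 conclusion above ("all images are preloaded") follows from decoding the `sudo crictl images --output json` payload and checking that the expected images are present. A minimal, self-contained Go sketch of that check (struct fields mirror the JSON printed in the log; this is not minikube's actual code):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList mirrors the shape of the crictl JSON output shown above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// hasImage reports whether any repo tag in the payload contains want.
func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Abbreviated payload in the same format as the log output above.
	raw := []byte(`{"images":[{"id":"1f6d574d502f","repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"repoDigests":[],"size":"117609954","pinned":false}]}`)
	ok, err := hasImage(raw, "kube-apiserver:v1.30.3")
	fmt.Println(ok, err)
}

Running the same check for each image required by Kubernetes v1.30.3 yields the "Images already preloaded, skipping extraction" decision seen in the log.
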
	I0729 01:38:50.319395   45889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:38:50.354230   45889 command_runner.go:130] > {
	I0729 01:38:50.354251   45889 command_runner.go:130] >   "images": [
	I0729 01:38:50.354255   45889 command_runner.go:130] >     {
	I0729 01:38:50.354263   45889 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 01:38:50.354268   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354276   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 01:38:50.354281   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354288   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354301   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 01:38:50.354313   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 01:38:50.354318   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354325   45889 command_runner.go:130] >       "size": "87165492",
	I0729 01:38:50.354332   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354337   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354349   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354356   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354362   45889 command_runner.go:130] >     },
	I0729 01:38:50.354367   45889 command_runner.go:130] >     {
	I0729 01:38:50.354378   45889 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 01:38:50.354385   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354394   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 01:38:50.354401   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354408   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354419   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 01:38:50.354431   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 01:38:50.354438   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354444   45889 command_runner.go:130] >       "size": "87174707",
	I0729 01:38:50.354448   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354456   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354463   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354469   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354476   45889 command_runner.go:130] >     },
	I0729 01:38:50.354481   45889 command_runner.go:130] >     {
	I0729 01:38:50.354494   45889 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 01:38:50.354503   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354515   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 01:38:50.354521   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354529   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354536   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 01:38:50.354550   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 01:38:50.354559   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354569   45889 command_runner.go:130] >       "size": "1363676",
	I0729 01:38:50.354578   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354587   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354604   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354612   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354616   45889 command_runner.go:130] >     },
	I0729 01:38:50.354622   45889 command_runner.go:130] >     {
	I0729 01:38:50.354631   45889 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 01:38:50.354641   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354652   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 01:38:50.354660   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354669   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354683   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 01:38:50.354700   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 01:38:50.354706   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354713   45889 command_runner.go:130] >       "size": "31470524",
	I0729 01:38:50.354723   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354732   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354741   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354750   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354759   45889 command_runner.go:130] >     },
	I0729 01:38:50.354768   45889 command_runner.go:130] >     {
	I0729 01:38:50.354780   45889 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 01:38:50.354787   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354793   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 01:38:50.354802   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354812   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354827   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 01:38:50.354841   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 01:38:50.354849   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354858   45889 command_runner.go:130] >       "size": "61245718",
	I0729 01:38:50.354866   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354870   45889 command_runner.go:130] >       "username": "nonroot",
	I0729 01:38:50.354873   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354879   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354887   45889 command_runner.go:130] >     },
	I0729 01:38:50.354896   45889 command_runner.go:130] >     {
	I0729 01:38:50.354908   45889 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 01:38:50.354918   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354928   45889 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 01:38:50.354936   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354946   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354956   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 01:38:50.354970   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 01:38:50.354978   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354988   45889 command_runner.go:130] >       "size": "150779692",
	I0729 01:38:50.354996   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355003   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355016   45889 command_runner.go:130] >       },
	I0729 01:38:50.355025   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355032   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355037   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355042   45889 command_runner.go:130] >     },
	I0729 01:38:50.355047   45889 command_runner.go:130] >     {
	I0729 01:38:50.355069   45889 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 01:38:50.355079   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355094   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 01:38:50.355108   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355117   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355129   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 01:38:50.355141   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 01:38:50.355150   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355158   45889 command_runner.go:130] >       "size": "117609954",
	I0729 01:38:50.355167   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355176   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355185   45889 command_runner.go:130] >       },
	I0729 01:38:50.355194   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355203   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355211   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355217   45889 command_runner.go:130] >     },
	I0729 01:38:50.355221   45889 command_runner.go:130] >     {
	I0729 01:38:50.355235   45889 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 01:38:50.355245   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355256   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 01:38:50.355264   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355274   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355295   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 01:38:50.355305   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 01:38:50.355310   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355317   45889 command_runner.go:130] >       "size": "112198984",
	I0729 01:38:50.355326   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355336   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355345   45889 command_runner.go:130] >       },
	I0729 01:38:50.355354   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355363   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355370   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355378   45889 command_runner.go:130] >     },
	I0729 01:38:50.355382   45889 command_runner.go:130] >     {
	I0729 01:38:50.355394   45889 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 01:38:50.355403   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355412   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 01:38:50.355420   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355427   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355441   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 01:38:50.355459   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 01:38:50.355466   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355470   45889 command_runner.go:130] >       "size": "85953945",
	I0729 01:38:50.355478   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.355488   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355495   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355504   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355513   45889 command_runner.go:130] >     },
	I0729 01:38:50.355521   45889 command_runner.go:130] >     {
	I0729 01:38:50.355534   45889 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 01:38:50.355543   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355552   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 01:38:50.355558   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355563   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355578   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 01:38:50.355593   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 01:38:50.355601   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355610   45889 command_runner.go:130] >       "size": "63051080",
	I0729 01:38:50.355619   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355626   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355633   45889 command_runner.go:130] >       },
	I0729 01:38:50.355637   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355643   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355649   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355657   45889 command_runner.go:130] >     },
	I0729 01:38:50.355663   45889 command_runner.go:130] >     {
	I0729 01:38:50.355675   45889 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 01:38:50.355684   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355694   45889 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 01:38:50.355702   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355709   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355721   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 01:38:50.355732   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 01:38:50.355741   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355747   45889 command_runner.go:130] >       "size": "750414",
	I0729 01:38:50.355758   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355767   45889 command_runner.go:130] >         "value": "65535"
	I0729 01:38:50.355775   45889 command_runner.go:130] >       },
	I0729 01:38:50.355784   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355793   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355801   45889 command_runner.go:130] >       "pinned": true
	I0729 01:38:50.355807   45889 command_runner.go:130] >     }
	I0729 01:38:50.355810   45889 command_runner.go:130] >   ]
	I0729 01:38:50.355818   45889 command_runner.go:130] > }
	I0729 01:38:50.355980   45889 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:38:50.355993   45889 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:38:50.356004   45889 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.3 crio true true} ...
	I0729 01:38:50.356133   45889 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-060411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
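
The kubeadm.go:946 block above shows the kubelet ExecStart override rendered for this node. A minimal Go sketch (an assumed template-based approach, not minikube's code) of producing that snippet from the node's hostname, IP and Kubernetes version:

package main

import (
	"os"
	"text/template"
)

// unitTmpl reproduces the override shown in the log, with the node-specific
// values parameterized.
const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"Hostname":          "multinode-060411",
		"NodeIP":            "192.168.39.140",
	})
}
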
	I0729 01:38:50.356215   45889 ssh_runner.go:195] Run: crio config
	I0729 01:38:50.397273   45889 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 01:38:50.397296   45889 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 01:38:50.397302   45889 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 01:38:50.397306   45889 command_runner.go:130] > #
	I0729 01:38:50.397312   45889 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 01:38:50.397318   45889 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 01:38:50.397327   45889 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 01:38:50.397353   45889 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 01:38:50.397360   45889 command_runner.go:130] > # reload'.
	I0729 01:38:50.397369   45889 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 01:38:50.397382   45889 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 01:38:50.397391   45889 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 01:38:50.397402   45889 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 01:38:50.397407   45889 command_runner.go:130] > [crio]
	I0729 01:38:50.397416   45889 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 01:38:50.397425   45889 command_runner.go:130] > # containers images, in this directory.
	I0729 01:38:50.397437   45889 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 01:38:50.397457   45889 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 01:38:50.397468   45889 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 01:38:50.397479   45889 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 01:38:50.397650   45889 command_runner.go:130] > # imagestore = ""
	I0729 01:38:50.397668   45889 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 01:38:50.397674   45889 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 01:38:50.397784   45889 command_runner.go:130] > storage_driver = "overlay"
	I0729 01:38:50.397796   45889 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 01:38:50.397805   45889 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 01:38:50.397812   45889 command_runner.go:130] > storage_option = [
	I0729 01:38:50.397982   45889 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 01:38:50.397991   45889 command_runner.go:130] > ]
	I0729 01:38:50.397997   45889 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 01:38:50.398013   45889 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 01:38:50.398206   45889 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 01:38:50.398221   45889 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 01:38:50.398230   45889 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 01:38:50.398237   45889 command_runner.go:130] > # always happen on a node reboot
	I0729 01:38:50.398520   45889 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 01:38:50.398535   45889 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 01:38:50.398541   45889 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 01:38:50.398548   45889 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 01:38:50.398653   45889 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 01:38:50.398670   45889 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 01:38:50.398683   45889 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 01:38:50.398915   45889 command_runner.go:130] > # internal_wipe = true
	I0729 01:38:50.398927   45889 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 01:38:50.398933   45889 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 01:38:50.399142   45889 command_runner.go:130] > # internal_repair = false
	I0729 01:38:50.399157   45889 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 01:38:50.399167   45889 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 01:38:50.399178   45889 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 01:38:50.399380   45889 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 01:38:50.399394   45889 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 01:38:50.399398   45889 command_runner.go:130] > [crio.api]
	I0729 01:38:50.399406   45889 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 01:38:50.399668   45889 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 01:38:50.399683   45889 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 01:38:50.399937   45889 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 01:38:50.399953   45889 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 01:38:50.399961   45889 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 01:38:50.400273   45889 command_runner.go:130] > # stream_port = "0"
	I0729 01:38:50.400288   45889 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 01:38:50.400296   45889 command_runner.go:130] > # stream_enable_tls = false
	I0729 01:38:50.400306   45889 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 01:38:50.400372   45889 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 01:38:50.400384   45889 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 01:38:50.400393   45889 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 01:38:50.400399   45889 command_runner.go:130] > # minutes.
	I0729 01:38:50.400407   45889 command_runner.go:130] > # stream_tls_cert = ""
	I0729 01:38:50.400424   45889 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 01:38:50.400437   45889 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 01:38:50.400445   45889 command_runner.go:130] > # stream_tls_key = ""
	I0729 01:38:50.400457   45889 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 01:38:50.400467   45889 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 01:38:50.400490   45889 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 01:38:50.400505   45889 command_runner.go:130] > # stream_tls_ca = ""
	I0729 01:38:50.400520   45889 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 01:38:50.400532   45889 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 01:38:50.400543   45889 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 01:38:50.400553   45889 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 01:38:50.400566   45889 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 01:38:50.400577   45889 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 01:38:50.400586   45889 command_runner.go:130] > [crio.runtime]
	I0729 01:38:50.400597   45889 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 01:38:50.400608   45889 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 01:38:50.400614   45889 command_runner.go:130] > # "nofile=1024:2048"
	I0729 01:38:50.400627   45889 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 01:38:50.400637   45889 command_runner.go:130] > # default_ulimits = [
	I0729 01:38:50.400644   45889 command_runner.go:130] > # ]
	I0729 01:38:50.400655   45889 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 01:38:50.400665   45889 command_runner.go:130] > # no_pivot = false
	I0729 01:38:50.400675   45889 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 01:38:50.400687   45889 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 01:38:50.400697   45889 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 01:38:50.400708   45889 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 01:38:50.400716   45889 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 01:38:50.400729   45889 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 01:38:50.400740   45889 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 01:38:50.400751   45889 command_runner.go:130] > # Cgroup setting for conmon
	I0729 01:38:50.400763   45889 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 01:38:50.400773   45889 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 01:38:50.400783   45889 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 01:38:50.400793   45889 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 01:38:50.400806   45889 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 01:38:50.400813   45889 command_runner.go:130] > conmon_env = [
	I0729 01:38:50.400822   45889 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 01:38:50.400833   45889 command_runner.go:130] > ]
	I0729 01:38:50.400842   45889 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 01:38:50.400856   45889 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 01:38:50.400866   45889 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 01:38:50.400872   45889 command_runner.go:130] > # default_env = [
	I0729 01:38:50.400884   45889 command_runner.go:130] > # ]
	I0729 01:38:50.400896   45889 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 01:38:50.400914   45889 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 01:38:50.400923   45889 command_runner.go:130] > # selinux = false
	I0729 01:38:50.400932   45889 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 01:38:50.400943   45889 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 01:38:50.400952   45889 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 01:38:50.400960   45889 command_runner.go:130] > # seccomp_profile = ""
	I0729 01:38:50.400973   45889 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 01:38:50.400982   45889 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 01:38:50.400995   45889 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 01:38:50.401005   45889 command_runner.go:130] > # which might increase security.
	I0729 01:38:50.401015   45889 command_runner.go:130] > # This option is currently deprecated,
	I0729 01:38:50.401027   45889 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 01:38:50.401033   45889 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 01:38:50.401045   45889 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 01:38:50.401058   45889 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 01:38:50.401070   45889 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 01:38:50.401083   45889 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 01:38:50.401094   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.401101   45889 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 01:38:50.401122   45889 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 01:38:50.401133   45889 command_runner.go:130] > # the cgroup blockio controller.
	I0729 01:38:50.401142   45889 command_runner.go:130] > # blockio_config_file = ""
	I0729 01:38:50.401152   45889 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 01:38:50.401161   45889 command_runner.go:130] > # blockio parameters.
	I0729 01:38:50.401167   45889 command_runner.go:130] > # blockio_reload = false
	I0729 01:38:50.401178   45889 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 01:38:50.401184   45889 command_runner.go:130] > # irqbalance daemon.
	I0729 01:38:50.401189   45889 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 01:38:50.401196   45889 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 01:38:50.401202   45889 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 01:38:50.401209   45889 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 01:38:50.401217   45889 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 01:38:50.401225   45889 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 01:38:50.401231   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.401239   45889 command_runner.go:130] > # rdt_config_file = ""
	I0729 01:38:50.401246   45889 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 01:38:50.401252   45889 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 01:38:50.401290   45889 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 01:38:50.401301   45889 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 01:38:50.401310   45889 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 01:38:50.401320   45889 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 01:38:50.401331   45889 command_runner.go:130] > # will be added.
	I0729 01:38:50.401338   45889 command_runner.go:130] > # default_capabilities = [
	I0729 01:38:50.401346   45889 command_runner.go:130] > # 	"CHOWN",
	I0729 01:38:50.401354   45889 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 01:38:50.401366   45889 command_runner.go:130] > # 	"FSETID",
	I0729 01:38:50.401372   45889 command_runner.go:130] > # 	"FOWNER",
	I0729 01:38:50.401380   45889 command_runner.go:130] > # 	"SETGID",
	I0729 01:38:50.401385   45889 command_runner.go:130] > # 	"SETUID",
	I0729 01:38:50.401394   45889 command_runner.go:130] > # 	"SETPCAP",
	I0729 01:38:50.401402   45889 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 01:38:50.401412   45889 command_runner.go:130] > # 	"KILL",
	I0729 01:38:50.401417   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401431   45889 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 01:38:50.401444   45889 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 01:38:50.401454   45889 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 01:38:50.401466   45889 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 01:38:50.401478   45889 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 01:38:50.401488   45889 command_runner.go:130] > default_sysctls = [
	I0729 01:38:50.401496   45889 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 01:38:50.401504   45889 command_runner.go:130] > ]
	I0729 01:38:50.401511   45889 command_runner.go:130] > # List of devices on the host that a
	I0729 01:38:50.401524   45889 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 01:38:50.401534   45889 command_runner.go:130] > # allowed_devices = [
	I0729 01:38:50.401543   45889 command_runner.go:130] > # 	"/dev/fuse",
	I0729 01:38:50.401548   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401559   45889 command_runner.go:130] > # List of additional devices. specified as
	I0729 01:38:50.401574   45889 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 01:38:50.401585   45889 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 01:38:50.401596   45889 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 01:38:50.401611   45889 command_runner.go:130] > # additional_devices = [
	I0729 01:38:50.401621   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401629   45889 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 01:38:50.401642   45889 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 01:38:50.401652   45889 command_runner.go:130] > # 	"/etc/cdi",
	I0729 01:38:50.401658   45889 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 01:38:50.401666   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401675   45889 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 01:38:50.401689   45889 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 01:38:50.401699   45889 command_runner.go:130] > # Defaults to false.
	I0729 01:38:50.401709   45889 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 01:38:50.401721   45889 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 01:38:50.401734   45889 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 01:38:50.401744   45889 command_runner.go:130] > # hooks_dir = [
	I0729 01:38:50.401752   45889 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 01:38:50.401760   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401769   45889 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 01:38:50.401781   45889 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 01:38:50.401792   45889 command_runner.go:130] > # its default mounts from the following two files:
	I0729 01:38:50.401802   45889 command_runner.go:130] > #
	I0729 01:38:50.401811   45889 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 01:38:50.401824   45889 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 01:38:50.401835   45889 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 01:38:50.401843   45889 command_runner.go:130] > #
	I0729 01:38:50.401863   45889 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 01:38:50.401877   45889 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 01:38:50.401890   45889 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 01:38:50.401902   45889 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 01:38:50.401910   45889 command_runner.go:130] > #
	I0729 01:38:50.401915   45889 command_runner.go:130] > # default_mounts_file = ""
	I0729 01:38:50.401922   45889 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 01:38:50.401928   45889 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 01:38:50.401934   45889 command_runner.go:130] > pids_limit = 1024
	I0729 01:38:50.401940   45889 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 01:38:50.401946   45889 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 01:38:50.401954   45889 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 01:38:50.401967   45889 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 01:38:50.401972   45889 command_runner.go:130] > # log_size_max = -1
	I0729 01:38:50.401979   45889 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 01:38:50.401985   45889 command_runner.go:130] > # log_to_journald = false
	I0729 01:38:50.401991   45889 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 01:38:50.401997   45889 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 01:38:50.402004   45889 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 01:38:50.402010   45889 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 01:38:50.402016   45889 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 01:38:50.402022   45889 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 01:38:50.402027   45889 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 01:38:50.402031   45889 command_runner.go:130] > # read_only = false
	I0729 01:38:50.402037   45889 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 01:38:50.402043   45889 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 01:38:50.402051   45889 command_runner.go:130] > # live configuration reload.
	I0729 01:38:50.402057   45889 command_runner.go:130] > # log_level = "info"
	I0729 01:38:50.402068   45889 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 01:38:50.402078   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.402087   45889 command_runner.go:130] > # log_filter = ""
	I0729 01:38:50.402096   45889 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 01:38:50.402110   45889 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 01:38:50.402120   45889 command_runner.go:130] > # separated by comma.
	I0729 01:38:50.402131   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402140   45889 command_runner.go:130] > # uid_mappings = ""
	I0729 01:38:50.402152   45889 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 01:38:50.402164   45889 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 01:38:50.402173   45889 command_runner.go:130] > # separated by comma.
	I0729 01:38:50.402185   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402194   45889 command_runner.go:130] > # gid_mappings = ""
	I0729 01:38:50.402203   45889 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 01:38:50.402212   45889 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 01:38:50.402218   45889 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 01:38:50.402227   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402231   45889 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 01:38:50.402240   45889 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 01:38:50.402245   45889 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 01:38:50.402265   45889 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 01:38:50.402280   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402290   45889 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 01:38:50.402299   45889 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 01:38:50.402312   45889 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 01:38:50.402324   45889 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 01:38:50.402336   45889 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 01:38:50.402345   45889 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 01:38:50.402358   45889 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 01:38:50.402369   45889 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 01:38:50.402377   45889 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 01:38:50.402387   45889 command_runner.go:130] > drop_infra_ctr = false
	I0729 01:38:50.402396   45889 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 01:38:50.402407   45889 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 01:38:50.402417   45889 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 01:38:50.402422   45889 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 01:38:50.402429   45889 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 01:38:50.402436   45889 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 01:38:50.402441   45889 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 01:38:50.402446   45889 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 01:38:50.402452   45889 command_runner.go:130] > # shared_cpuset = ""
	I0729 01:38:50.402458   45889 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 01:38:50.402464   45889 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 01:38:50.402468   45889 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 01:38:50.402477   45889 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 01:38:50.402483   45889 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 01:38:50.402488   45889 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 01:38:50.402495   45889 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 01:38:50.402501   45889 command_runner.go:130] > # enable_criu_support = false
	I0729 01:38:50.402512   45889 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 01:38:50.402523   45889 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 01:38:50.402533   45889 command_runner.go:130] > # enable_pod_events = false
	I0729 01:38:50.402545   45889 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 01:38:50.402566   45889 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 01:38:50.402572   45889 command_runner.go:130] > # default_runtime = "runc"
	I0729 01:38:50.402585   45889 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 01:38:50.402600   45889 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 01:38:50.402616   45889 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 01:38:50.402627   45889 command_runner.go:130] > # creation as a file is not desired either.
	I0729 01:38:50.402639   45889 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 01:38:50.402650   45889 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 01:38:50.402658   45889 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 01:38:50.402662   45889 command_runner.go:130] > # ]
	I0729 01:38:50.402667   45889 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 01:38:50.402675   45889 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 01:38:50.402682   45889 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 01:38:50.402689   45889 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 01:38:50.402692   45889 command_runner.go:130] > #
	I0729 01:38:50.402697   45889 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 01:38:50.402701   45889 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 01:38:50.402720   45889 command_runner.go:130] > # runtime_type = "oci"
	I0729 01:38:50.402729   45889 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 01:38:50.402736   45889 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 01:38:50.402747   45889 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 01:38:50.402755   45889 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 01:38:50.402763   45889 command_runner.go:130] > # monitor_env = []
	I0729 01:38:50.402772   45889 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 01:38:50.402781   45889 command_runner.go:130] > # allowed_annotations = []
	I0729 01:38:50.402790   45889 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 01:38:50.402797   45889 command_runner.go:130] > # Where:
	I0729 01:38:50.402802   45889 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 01:38:50.402810   45889 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 01:38:50.402815   45889 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 01:38:50.402823   45889 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 01:38:50.402828   45889 command_runner.go:130] > #   in $PATH.
	I0729 01:38:50.402838   45889 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 01:38:50.402852   45889 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 01:38:50.402866   45889 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 01:38:50.402876   45889 command_runner.go:130] > #   state.
	I0729 01:38:50.402887   45889 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 01:38:50.402898   45889 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 01:38:50.402912   45889 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 01:38:50.402923   45889 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 01:38:50.402933   45889 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 01:38:50.402944   45889 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 01:38:50.402954   45889 command_runner.go:130] > #   The currently recognized values are:
	I0729 01:38:50.402964   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 01:38:50.402978   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 01:38:50.402993   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 01:38:50.403005   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 01:38:50.403021   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 01:38:50.403034   45889 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 01:38:50.403043   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 01:38:50.403066   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 01:38:50.403079   45889 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 01:38:50.403093   45889 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 01:38:50.403102   45889 command_runner.go:130] > #   deprecated option "conmon".
	I0729 01:38:50.403113   45889 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 01:38:50.403120   45889 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 01:38:50.403130   45889 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 01:38:50.403141   45889 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 01:38:50.403151   45889 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 01:38:50.403158   45889 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 01:38:50.403168   45889 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 01:38:50.403175   45889 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 01:38:50.403181   45889 command_runner.go:130] > #
	I0729 01:38:50.403187   45889 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 01:38:50.403191   45889 command_runner.go:130] > #
	I0729 01:38:50.403199   45889 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 01:38:50.403208   45889 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 01:38:50.403212   45889 command_runner.go:130] > #
	I0729 01:38:50.403220   45889 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 01:38:50.403229   45889 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 01:38:50.403234   45889 command_runner.go:130] > #
	I0729 01:38:50.403244   45889 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 01:38:50.403251   45889 command_runner.go:130] > # feature.
	I0729 01:38:50.403256   45889 command_runner.go:130] > #
	I0729 01:38:50.403271   45889 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 01:38:50.403285   45889 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 01:38:50.403298   45889 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 01:38:50.403311   45889 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 01:38:50.403323   45889 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 01:38:50.403329   45889 command_runner.go:130] > #
	I0729 01:38:50.403335   45889 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 01:38:50.403344   45889 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 01:38:50.403349   45889 command_runner.go:130] > #
	I0729 01:38:50.403355   45889 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 01:38:50.403362   45889 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 01:38:50.403365   45889 command_runner.go:130] > #
	I0729 01:38:50.403371   45889 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 01:38:50.403378   45889 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 01:38:50.403382   45889 command_runner.go:130] > # limitation.
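	The runtime table that follows only honors this annotation if it appears in that handler's allowed_annotations; for reference, a pod that opts into the notifier would look roughly like the sketch below (hypothetical pod name and image, assuming a handler that allows the annotation):
	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo                         # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate after 5s of blocked syscalls
	spec:
	  restartPolicy: Never                                 # required, otherwise the kubelet restarts the container
	  containers:
	  - name: app
	    image: busybox                                     # placeholder image
	    command: ["sleep", "3600"]
	EOF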
	I0729 01:38:50.403389   45889 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 01:38:50.403393   45889 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 01:38:50.403399   45889 command_runner.go:130] > runtime_type = "oci"
	I0729 01:38:50.403403   45889 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 01:38:50.403409   45889 command_runner.go:130] > runtime_config_path = ""
	I0729 01:38:50.403414   45889 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 01:38:50.403420   45889 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 01:38:50.403423   45889 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 01:38:50.403429   45889 command_runner.go:130] > monitor_env = [
	I0729 01:38:50.403434   45889 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 01:38:50.403439   45889 command_runner.go:130] > ]
	I0729 01:38:50.403443   45889 command_runner.go:130] > privileged_without_host_devices = false
	I0729 01:38:50.403451   45889 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 01:38:50.403456   45889 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 01:38:50.403464   45889 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 01:38:50.403471   45889 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 01:38:50.403480   45889 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 01:38:50.403486   45889 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 01:38:50.403496   45889 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 01:38:50.403506   45889 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 01:38:50.403514   45889 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 01:38:50.403522   45889 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 01:38:50.403527   45889 command_runner.go:130] > # Example:
	I0729 01:38:50.403532   45889 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 01:38:50.403537   45889 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 01:38:50.403541   45889 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 01:38:50.403545   45889 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 01:38:50.403548   45889 command_runner.go:130] > # cpuset = 0
	I0729 01:38:50.403551   45889 command_runner.go:130] > # cpushares = "0-1"
	I0729 01:38:50.403554   45889 command_runner.go:130] > # Where:
	I0729 01:38:50.403561   45889 command_runner.go:130] > # The workload name is workload-type.
	I0729 01:38:50.403567   45889 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 01:38:50.403572   45889 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 01:38:50.403577   45889 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 01:38:50.403584   45889 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 01:38:50.403589   45889 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 01:38:50.403594   45889 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 01:38:50.403599   45889 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 01:38:50.403603   45889 command_runner.go:130] > # Default value is set to true
	I0729 01:38:50.403607   45889 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 01:38:50.403612   45889 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 01:38:50.403617   45889 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 01:38:50.403621   45889 command_runner.go:130] > # Default value is set to 'false'
	I0729 01:38:50.403624   45889 command_runner.go:130] > # disable_hostport_mapping = false
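	The values minikube overrides here (cgroup_manager = "cgroupfs", pids_limit = 1024, the runc block above) are the kind of settings that can also be carried in a drop-in instead of the main file; a minimal sketch, assuming the standard /etc/crio/crio.conf.d/ drop-in directory and a hypothetical file name:
	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-minikube-overrides.conf
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	pids_limit = 1024
	EOF
	sudo systemctl restart crio   # restart so the new values are picked up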
	I0729 01:38:50.403630   45889 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 01:38:50.403633   45889 command_runner.go:130] > #
	I0729 01:38:50.403638   45889 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 01:38:50.403644   45889 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 01:38:50.403649   45889 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 01:38:50.403655   45889 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 01:38:50.403660   45889 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 01:38:50.403663   45889 command_runner.go:130] > [crio.image]
	I0729 01:38:50.403668   45889 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 01:38:50.403672   45889 command_runner.go:130] > # default_transport = "docker://"
	I0729 01:38:50.403677   45889 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 01:38:50.403682   45889 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 01:38:50.403686   45889 command_runner.go:130] > # global_auth_file = ""
	I0729 01:38:50.403691   45889 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 01:38:50.403696   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.403700   45889 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 01:38:50.403705   45889 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 01:38:50.403712   45889 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 01:38:50.403717   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.403721   45889 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 01:38:50.403729   45889 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 01:38:50.403734   45889 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 01:38:50.403744   45889 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 01:38:50.403751   45889 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 01:38:50.403755   45889 command_runner.go:130] > # pause_command = "/pause"
	I0729 01:38:50.403761   45889 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 01:38:50.403769   45889 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 01:38:50.403774   45889 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 01:38:50.403780   45889 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 01:38:50.403785   45889 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 01:38:50.403793   45889 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 01:38:50.403796   45889 command_runner.go:130] > # pinned_images = [
	I0729 01:38:50.403800   45889 command_runner.go:130] > # ]
	I0729 01:38:50.403807   45889 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 01:38:50.403815   45889 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 01:38:50.403822   45889 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 01:38:50.403830   45889 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 01:38:50.403835   45889 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 01:38:50.403840   45889 command_runner.go:130] > # signature_policy = ""
	I0729 01:38:50.403846   45889 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 01:38:50.403857   45889 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 01:38:50.403865   45889 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 01:38:50.403872   45889 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 01:38:50.403878   45889 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 01:38:50.403885   45889 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 01:38:50.403891   45889 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 01:38:50.403899   45889 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 01:38:50.403903   45889 command_runner.go:130] > # changing them here.
	I0729 01:38:50.403908   45889 command_runner.go:130] > # insecure_registries = [
	I0729 01:38:50.403912   45889 command_runner.go:130] > # ]
	I0729 01:38:50.403919   45889 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 01:38:50.403924   45889 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 01:38:50.403931   45889 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 01:38:50.403936   45889 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 01:38:50.403942   45889 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 01:38:50.403947   45889 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 01:38:50.403953   45889 command_runner.go:130] > # CNI plugins.
	I0729 01:38:50.403957   45889 command_runner.go:130] > [crio.network]
	I0729 01:38:50.403963   45889 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 01:38:50.403970   45889 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 01:38:50.403976   45889 command_runner.go:130] > # cni_default_network = ""
	I0729 01:38:50.403981   45889 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 01:38:50.403987   45889 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 01:38:50.403993   45889 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 01:38:50.403998   45889 command_runner.go:130] > # plugin_dirs = [
	I0729 01:38:50.404002   45889 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 01:38:50.404007   45889 command_runner.go:130] > # ]
	I0729 01:38:50.404013   45889 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 01:38:50.404019   45889 command_runner.go:130] > [crio.metrics]
	I0729 01:38:50.404023   45889 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 01:38:50.404028   45889 command_runner.go:130] > enable_metrics = true
	I0729 01:38:50.404032   45889 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 01:38:50.404035   45889 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 01:38:50.404041   45889 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 01:38:50.404049   45889 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 01:38:50.404054   45889 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 01:38:50.404060   45889 command_runner.go:130] > # metrics_collectors = [
	I0729 01:38:50.404064   45889 command_runner.go:130] > # 	"operations",
	I0729 01:38:50.404070   45889 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 01:38:50.404075   45889 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 01:38:50.404081   45889 command_runner.go:130] > # 	"operations_errors",
	I0729 01:38:50.404085   45889 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 01:38:50.404090   45889 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 01:38:50.404094   45889 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 01:38:50.404100   45889 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 01:38:50.404107   45889 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 01:38:50.404111   45889 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 01:38:50.404117   45889 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 01:38:50.404121   45889 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 01:38:50.404127   45889 command_runner.go:130] > # 	"containers_oom_total",
	I0729 01:38:50.404131   45889 command_runner.go:130] > # 	"containers_oom",
	I0729 01:38:50.404137   45889 command_runner.go:130] > # 	"processes_defunct",
	I0729 01:38:50.404140   45889 command_runner.go:130] > # 	"operations_total",
	I0729 01:38:50.404145   45889 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 01:38:50.404149   45889 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 01:38:50.404156   45889 command_runner.go:130] > # 	"operations_errors_total",
	I0729 01:38:50.404160   45889 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 01:38:50.404166   45889 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 01:38:50.404171   45889 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 01:38:50.404177   45889 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 01:38:50.404182   45889 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 01:38:50.404188   45889 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 01:38:50.404192   45889 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 01:38:50.404198   45889 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 01:38:50.404204   45889 command_runner.go:130] > # ]
	I0729 01:38:50.404211   45889 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 01:38:50.404215   45889 command_runner.go:130] > # metrics_port = 9090
	I0729 01:38:50.404221   45889 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 01:38:50.404225   45889 command_runner.go:130] > # metrics_socket = ""
	I0729 01:38:50.404232   45889 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 01:38:50.404238   45889 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 01:38:50.404245   45889 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 01:38:50.404251   45889 command_runner.go:130] > # certificate on any modification event.
	I0729 01:38:50.404255   45889 command_runner.go:130] > # metrics_cert = ""
	I0729 01:38:50.404260   45889 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 01:38:50.404267   45889 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 01:38:50.404271   45889 command_runner.go:130] > # metrics_key = ""
	I0729 01:38:50.404278   45889 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 01:38:50.404282   45889 command_runner.go:130] > [crio.tracing]
	I0729 01:38:50.404288   45889 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 01:38:50.404293   45889 command_runner.go:130] > # enable_tracing = false
	I0729 01:38:50.404299   45889 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 01:38:50.404305   45889 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 01:38:50.404312   45889 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 01:38:50.404318   45889 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 01:38:50.404323   45889 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 01:38:50.404328   45889 command_runner.go:130] > [crio.nri]
	I0729 01:38:50.404333   45889 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 01:38:50.404336   45889 command_runner.go:130] > # enable_nri = false
	I0729 01:38:50.404341   45889 command_runner.go:130] > # NRI socket to listen on.
	I0729 01:38:50.404346   45889 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 01:38:50.404352   45889 command_runner.go:130] > # NRI plugin directory to use.
	I0729 01:38:50.404357   45889 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 01:38:50.404363   45889 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 01:38:50.404368   45889 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 01:38:50.404375   45889 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 01:38:50.404379   45889 command_runner.go:130] > # nri_disable_connections = false
	I0729 01:38:50.404384   45889 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 01:38:50.404391   45889 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 01:38:50.404395   45889 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 01:38:50.404402   45889 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 01:38:50.404407   45889 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 01:38:50.404413   45889 command_runner.go:130] > [crio.stats]
	I0729 01:38:50.404418   45889 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 01:38:50.404425   45889 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 01:38:50.404429   45889 command_runner.go:130] > # stats_collection_period = 0
	I0729 01:38:50.404993   45889 command_runner.go:130] ! time="2024-07-29 01:38:50.359340534Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 01:38:50.405023   45889 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
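	The same rendered configuration can be read back from the node instead of scrolling this dump; a quick sketch using the profile name from this run:
	minikube -p multinode-060411 ssh "sudo crio config | head -n 40"   # TOML that CRI-O derives from its config files
	minikube -p multinode-060411 ssh "sudo crictl info"                # runtime status and config as JSON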
	I0729 01:38:50.405143   45889 cni.go:84] Creating CNI manager for ""
	I0729 01:38:50.405155   45889 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 01:38:50.405164   45889 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:38:50.405181   45889 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-060411 NodeName:multinode-060411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:38:50.405301   45889 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-060411"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
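	minikube writes this rendered config to /var/tmp/minikube/kubeadm.yaml.new a few lines below; a cheap syntax check against that file is sketched here (assuming the kubeadm config validate subcommand available in recent kubeadm releases):
	minikube -p multinode-060411 ssh \
	  "sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"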
	
	I0729 01:38:50.405356   45889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:38:50.415733   45889 command_runner.go:130] > kubeadm
	I0729 01:38:50.415745   45889 command_runner.go:130] > kubectl
	I0729 01:38:50.415748   45889 command_runner.go:130] > kubelet
	I0729 01:38:50.415765   45889 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:38:50.415810   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:38:50.425796   45889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 01:38:50.442463   45889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:38:50.458971   45889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 01:38:50.476088   45889 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0729 01:38:50.480006   45889 command_runner.go:130] > 192.168.39.140	control-plane.minikube.internal
	I0729 01:38:50.480061   45889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:38:50.624732   45889 ssh_runner.go:195] Run: sudo systemctl start kubelet
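	If the kubelet did not come up after this daemon-reload/start pair, the two unit files pushed above can be inspected in place; a quick sketch:
	minikube -p multinode-060411 ssh "systemctl cat kubelet"                         # unit plus the 10-kubeadm.conf drop-in
	minikube -p multinode-060411 ssh "sudo journalctl -u kubelet --no-pager -n 20"   # recent kubelet log lines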
	I0729 01:38:50.640460   45889 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411 for IP: 192.168.39.140
	I0729 01:38:50.640489   45889 certs.go:194] generating shared ca certs ...
	I0729 01:38:50.640511   45889 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:38:50.640687   45889 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:38:50.640751   45889 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:38:50.640763   45889 certs.go:256] generating profile certs ...
	I0729 01:38:50.640866   45889 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/client.key
	I0729 01:38:50.640940   45889 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.key.cce4d0cc
	I0729 01:38:50.640987   45889 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.key
	I0729 01:38:50.641002   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:38:50.641021   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:38:50.641046   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:38:50.641070   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:38:50.641087   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:38:50.641104   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:38:50.641117   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:38:50.641127   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:38:50.641179   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:38:50.641207   45889 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:38:50.641215   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:38:50.641235   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:38:50.641257   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:38:50.641276   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:38:50.641316   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:38:50.641352   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.641368   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:50.641383   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.641957   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:38:50.667969   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:38:50.693452   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:38:50.718452   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:38:50.741433   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 01:38:50.764363   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:38:50.788361   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:38:50.812303   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:38:50.835346   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:38:50.858424   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:38:50.881115   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:38:50.904003   45889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
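	At this point every certificate and key copied above should be present on the node; a one-line spot check:
	minikube -p multinode-060411 ssh "sudo ls -l /var/lib/minikube/certs /usr/share/ca-certificates"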
	I0729 01:38:50.920431   45889 ssh_runner.go:195] Run: openssl version
	I0729 01:38:50.926644   45889 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 01:38:50.926707   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:38:50.937683   45889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.941922   45889 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.942053   45889 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.942110   45889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.947433   45889 command_runner.go:130] > 51391683
	I0729 01:38:50.947684   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:38:50.957408   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:38:50.968064   45889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.972212   45889 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.972242   45889 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.972269   45889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.977757   45889 command_runner.go:130] > 3ec20f2e
	I0729 01:38:50.977808   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:38:50.988070   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:38:50.998713   45889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.002948   45889 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.002977   45889 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.003019   45889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.008729   45889 command_runner.go:130] > b5213941
	I0729 01:38:51.008813   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:38:51.018191   45889 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:38:51.022379   45889 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:38:51.022405   45889 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 01:38:51.022414   45889 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0729 01:38:51.022423   45889 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 01:38:51.022432   45889 command_runner.go:130] > Access: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022443   45889 command_runner.go:130] > Modify: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022451   45889 command_runner.go:130] > Change: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022458   45889 command_runner.go:130] >  Birth: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022524   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:38:51.028877   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.028957   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:38:51.034346   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.034506   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:38:51.040285   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.040335   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:38:51.046108   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.046174   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:38:51.051674   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.051734   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 01:38:51.056980   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.057131   45889 kubeadm.go:392] StartCluster: {Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:38:51.057235   45889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:38:51.057287   45889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:38:51.103358   45889 command_runner.go:130] > 18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54
	I0729 01:38:51.103390   45889 command_runner.go:130] > f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316
	I0729 01:38:51.103401   45889 command_runner.go:130] > a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1
	I0729 01:38:51.103410   45889 command_runner.go:130] > ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e
	I0729 01:38:51.103419   45889 command_runner.go:130] > bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de
	I0729 01:38:51.103428   45889 command_runner.go:130] > a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4
	I0729 01:38:51.103438   45889 command_runner.go:130] > ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd
	I0729 01:38:51.103471   45889 command_runner.go:130] > c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8
	I0729 01:38:51.103497   45889 cri.go:89] found id: "18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54"
	I0729 01:38:51.103506   45889 cri.go:89] found id: "f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316"
	I0729 01:38:51.103510   45889 cri.go:89] found id: "a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1"
	I0729 01:38:51.103514   45889 cri.go:89] found id: "ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e"
	I0729 01:38:51.103517   45889 cri.go:89] found id: "bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de"
	I0729 01:38:51.103521   45889 cri.go:89] found id: "a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4"
	I0729 01:38:51.103526   45889 cri.go:89] found id: "ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd"
	I0729 01:38:51.103529   45889 cri.go:89] found id: "c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8"
	I0729 01:38:51.103531   45889 cri.go:89] found id: ""
	I0729 01:38:51.103580   45889 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.220505250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217240220483338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6569fde-e31d-422f-be2d-ba9b3f408d38 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.220940654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40030ad2-cbcc-4689-9bed-f6a22491a02f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.221073432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40030ad2-cbcc-4689-9bed-f6a22491a02f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.221511762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40030ad2-cbcc-4689-9bed-f6a22491a02f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.263490146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=405d39a3-a8d5-44fa-ba39-99a5c359e318 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.263697178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=405d39a3-a8d5-44fa-ba39-99a5c359e318 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.265019890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=696ce7f0-13ea-42f0-97ec-fe26508ed801 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.265430068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217240265408369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=696ce7f0-13ea-42f0-97ec-fe26508ed801 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.266024617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1dee0d9-aec1-4413-b978-aa236018dab0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.266096418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1dee0d9-aec1-4413-b978-aa236018dab0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.266532474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1dee0d9-aec1-4413-b978-aa236018dab0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.306823329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c719dc1-a697-4ddd-8ade-1e5f9645467e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.306914601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c719dc1-a697-4ddd-8ade-1e5f9645467e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.308294799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75feb8e4-7060-435f-8397-9682b4a94da4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.308725783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217240308702722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75feb8e4-7060-435f-8397-9682b4a94da4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.309183964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75a3dbd8-cd2c-4cc4-9cab-835217d42c36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.309260617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75a3dbd8-cd2c-4cc4-9cab-835217d42c36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.309626240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75a3dbd8-cd2c-4cc4-9cab-835217d42c36 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.351357333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c9fc807-1898-4ae8-a81e-db4ac054e755 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.351450946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c9fc807-1898-4ae8-a81e-db4ac054e755 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.352721647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d04fa20e-2be3-47df-8b16-699652690f68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.353260619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217240353238121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d04fa20e-2be3-47df-8b16-699652690f68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.353699853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6328c14-e8a4-4229-88c7-e569f8821fc6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.353773153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6328c14-e8a4-4229-88c7-e569f8821fc6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:40:40 multinode-060411 crio[2875]: time="2024-07-29 01:40:40.354260195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6328c14-e8a4-4229-88c7-e569f8821fc6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4df6a6ff674dd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   405ee6e785e79       busybox-fc5497c4f-lfmwp
	ae351cb3c920c       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   e0f07bc4f18a7       kindnet-8csbb
	1ba2a2fb41f03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   67b79e167c379       coredns-7db6d8ff4d-mnz72
	3dd0c12ffb6b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   7eca6d986748f       storage-provisioner
	51ce34270fcd5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   30c77960c991c       kube-proxy-k7k6j
	299e426be4a55       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   61f6c9eb7c53d       etcd-multinode-060411
	e73231afd5e92       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   d2683e5e7e5d6       kube-controller-manager-multinode-060411
	6add5858278eb       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   7e5ba26a31f11       kube-scheduler-multinode-060411
	bff07df0b8693       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   de0369d916d14       kube-apiserver-multinode-060411
	bf650ce9fd8f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   4fb572bd76789       busybox-fc5497c4f-lfmwp
	18c694585ad58       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   61f6ff5d6d8d1       coredns-7db6d8ff4d-mnz72
	f9d2ae5c2528f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   8d777617fd166       storage-provisioner
	a309347431949       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   bf7dd7f13d304       kindnet-8csbb
	ef2d721b7d276       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   f2093ca047b64       kube-proxy-k7k6j
	bcee73846b860       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   3c5353bc3d719       etcd-multinode-060411
	a841bfa674c59       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   89c607e32a889       kube-controller-manager-multinode-060411
	ded106b4ad30b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   b3524d640c853       kube-scheduler-multinode-060411
	c9289f8f5185e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   1ec80da5a5027       kube-apiserver-multinode-060411
	
	
	==> coredns [18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54] <==
	[INFO] 10.244.1.2:50982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791769s
	[INFO] 10.244.1.2:59436 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135076s
	[INFO] 10.244.1.2:41003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156762s
	[INFO] 10.244.1.2:35882 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001348049s
	[INFO] 10.244.1.2:59947 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097365s
	[INFO] 10.244.1.2:41222 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097729s
	[INFO] 10.244.1.2:37489 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093826s
	[INFO] 10.244.0.3:48014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146981s
	[INFO] 10.244.0.3:36554 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144887s
	[INFO] 10.244.0.3:53982 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061017s
	[INFO] 10.244.0.3:33894 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067377s
	[INFO] 10.244.1.2:54271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125176s
	[INFO] 10.244.1.2:45884 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208461s
	[INFO] 10.244.1.2:34031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142101s
	[INFO] 10.244.1.2:38095 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108016s
	[INFO] 10.244.0.3:39252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114871s
	[INFO] 10.244.0.3:57701 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011446s
	[INFO] 10.244.0.3:39879 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091184s
	[INFO] 10.244.0.3:44400 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083292s
	[INFO] 10.244.1.2:52519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209241s
	[INFO] 10.244.1.2:53110 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087666s
	[INFO] 10.244.1.2:36160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085603s
	[INFO] 10.244.1.2:52796 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085467s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45320 - 41229 "HINFO IN 4713983434540112245.3086507766231972451. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018640909s
	
	
	==> describe nodes <==
	Name:               multinode-060411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-060411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-060411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_32_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:31:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-060411
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:40:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:32:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    multinode-060411
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b41fcf44c544a95a0b0c5c93894c9e5
	  System UUID:                5b41fcf4-4c54-4a95-a0b0-c5c93894c9e5
	  Boot ID:                    374b4634-fa11-4285-a07e-7da972ab5925
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lfmwp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 coredns-7db6d8ff4d-mnz72                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m27s
	  kube-system                 etcd-multinode-060411                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m41s
	  kube-system                 kindnet-8csbb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m27s
	  kube-system                 kube-apiserver-multinode-060411             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-controller-manager-multinode-060411    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-proxy-k7k6j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-scheduler-multinode-060411             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m26s                  kube-proxy       
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m47s (x8 over 8m47s)  kubelet          Node multinode-060411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s (x8 over 8m47s)  kubelet          Node multinode-060411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s (x7 over 8m47s)  kubelet          Node multinode-060411 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m41s                  kubelet          Node multinode-060411 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m41s                  kubelet          Node multinode-060411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m41s                  kubelet          Node multinode-060411 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m41s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m28s                  node-controller  Node multinode-060411 event: Registered Node multinode-060411 in Controller
	  Normal  NodeReady                8m11s                  kubelet          Node multinode-060411 status is now: NodeReady
	  Normal  Starting                 108s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)    kubelet          Node multinode-060411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)    kubelet          Node multinode-060411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)    kubelet          Node multinode-060411 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                    node-controller  Node multinode-060411 event: Registered Node multinode-060411 in Controller
	
	
	Name:               multinode-060411-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-060411-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-060411
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_39_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:39:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-060411-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:40:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:39:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:39:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:39:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:39:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-060411-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96fba407efd741e89bdfe057767df496
	  System UUID:                96fba407-efd7-41e8-9bdf-e057767df496
	  Boot ID:                    97542112-0e7f-4f39-981e-46c37d6d4d97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8n5zk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-4k724              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m40s
	  kube-system                 kube-proxy-ck46f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m34s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m40s (x2 over 7m40s)  kubelet     Node multinode-060411-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x2 over 7m40s)  kubelet     Node multinode-060411-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x2 over 7m40s)  kubelet     Node multinode-060411-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m20s                  kubelet     Node multinode-060411-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-060411-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-060411-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-060411-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-060411-m02 status is now: NodeReady
	
	
	Name:               multinode-060411-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-060411-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-060411
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_40_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:40:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-060411-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:40:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:40:37 +0000   Mon, 29 Jul 2024 01:40:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:40:37 +0000   Mon, 29 Jul 2024 01:40:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:40:37 +0000   Mon, 29 Jul 2024 01:40:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:40:37 +0000   Mon, 29 Jul 2024 01:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    multinode-060411-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ece6a9a1893a423297b8957b2ef449aa
	  System UUID:                ece6a9a1-893a-4232-97b8-957b2ef449aa
	  Boot ID:                    8d18ff78-b36f-4b74-a2de-623f48e7ac8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2s6xq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-proxy-w2ncl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m40s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m51s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m45s (x2 over 6m45s)  kubelet     Node multinode-060411-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x2 over 6m45s)  kubelet     Node multinode-060411-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m45s (x2 over 6m45s)  kubelet     Node multinode-060411-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m24s                  kubelet     Node multinode-060411-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m55s (x2 over 5m56s)  kubelet     Node multinode-060411-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m56s)  kubelet     Node multinode-060411-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m55s (x2 over 5m56s)  kubelet     Node multinode-060411-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m36s                  kubelet     Node multinode-060411-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-060411-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-060411-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-060411-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-060411-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063269] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.171605] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.153875] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.291425] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.147932] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.446261] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002945] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.085650] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 01:32] systemd-fstab-generator[1476]: Ignoring "noauto" option for root device
	[  +0.131408] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.164879] kauditd_printk_skb: 56 callbacks suppressed
	[Jul29 01:33] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 01:38] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.141196] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.177963] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.143071] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.316504] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +8.008418] systemd-fstab-generator[2958]: Ignoring "noauto" option for root device
	[  +0.084645] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.580218] systemd-fstab-generator[3080]: Ignoring "noauto" option for root device
	[  +5.684080] kauditd_printk_skb: 74 callbacks suppressed
	[Jul29 01:39] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.173500] systemd-fstab-generator[3912]: Ignoring "noauto" option for root device
	[ +17.575057] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9] <==
	{"level":"info","ts":"2024-07-29T01:38:53.578387Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:38:53.579677Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:38:53.579707Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:38:53.578683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac switched to configuration voters=(15657868212029965228)"}
	{"level":"info","ts":"2024-07-29T01:38:53.579893Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2024-07-29T01:38:53.580083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:53.580159Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:53.581615Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:38:53.583063Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:38:53.57877Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:38:53.583253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:38:55.256722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:55.256789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:55.256841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:55.256857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.256863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.256871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.256881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.262067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:multinode-060411 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:38:55.262113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:38:55.262433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:38:55.262536Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:38:55.26256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:38:55.264242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:38:55.264242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	
	
	==> etcd [bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de] <==
	{"level":"info","ts":"2024-07-29T01:31:55.611475Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:31:55.613104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:31:55.616474Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:31:55.616793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:31:55.616846Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:31:55.619332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"warn","ts":"2024-07-29T01:33:00.438557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.222935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4876431909037710045 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:43ac90fc1d929adc>","response":"size:42"}
	{"level":"info","ts":"2024-07-29T01:33:00.439049Z","caller":"traceutil/trace.go:171","msg":"trace[1746502634] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:515; }","duration":"230.699008ms","start":"2024-07-29T01:33:00.208329Z","end":"2024-07-29T01:33:00.439028Z","steps":["trace[1746502634] 'read index received'  (duration: 83.809238ms)","trace[1746502634] 'applied index is now lower than readState.Index'  (duration: 146.888097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:33:00.439376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.025656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-060411-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T01:33:00.439424Z","caller":"traceutil/trace.go:171","msg":"trace[171258326] range","detail":"{range_begin:/registry/minions/multinode-060411-m02; range_end:; response_count:1; response_revision:497; }","duration":"231.10678ms","start":"2024-07-29T01:33:00.208306Z","end":"2024-07-29T01:33:00.439412Z","steps":["trace[171258326] 'agreement among raft nodes before linearized reading'  (duration: 231.026391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:33:00.439068Z","caller":"traceutil/trace.go:171","msg":"trace[14796823] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"148.243876ms","start":"2024-07-29T01:33:00.290751Z","end":"2024-07-29T01:33:00.438995Z","steps":["trace[14796823] 'process raft request'  (duration: 148.114207ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:33:55.435097Z","caller":"traceutil/trace.go:171","msg":"trace[1096876734] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"215.645835ms","start":"2024-07-29T01:33:55.219405Z","end":"2024-07-29T01:33:55.43505Z","steps":["trace[1096876734] 'process raft request'  (duration: 133.658793ms)","trace[1096876734] 'compare'  (duration: 81.827838ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T01:33:55.435768Z","caller":"traceutil/trace.go:171","msg":"trace[663709683] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"184.745614ms","start":"2024-07-29T01:33:55.251012Z","end":"2024-07-29T01:33:55.435757Z","steps":["trace[663709683] 'process raft request'  (duration: 184.45659ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:34:02.604367Z","caller":"traceutil/trace.go:171","msg":"trace[1638847618] transaction","detail":"{read_only:false; response_revision:675; number_of_response:1; }","duration":"227.079893ms","start":"2024-07-29T01:34:02.377272Z","end":"2024-07-29T01:34:02.604352Z","steps":["trace[1638847618] 'process raft request'  (duration: 226.773508ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:37:10.512704Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T01:37:10.512877Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-060411","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	{"level":"warn","ts":"2024-07-29T01:37:10.513043Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:37:10.513132Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/29 01:37:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T01:37:10.602611Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:37:10.602652Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T01:37:10.602737Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d94bec2e0ded43ac","current-leader-member-id":"d94bec2e0ded43ac"}
	{"level":"info","ts":"2024-07-29T01:37:10.605885Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:37:10.606282Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:37:10.606325Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-060411","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	
	
	==> kernel <==
	 01:40:40 up 9 min,  0 users,  load average: 0.44, 0.25, 0.13
	Linux multinode-060411 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1] <==
	I0729 01:36:28.984751       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:38.981826       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:36:38.981900       1 main.go:299] handling current node
	I0729 01:36:38.981918       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:36:38.981924       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:36:38.982159       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:36:38.982209       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:48.984503       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:36:48.984627       1 main.go:299] handling current node
	I0729 01:36:48.984660       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:36:48.984679       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:36:48.984888       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:36:48.984918       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:58.981829       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:36:58.982079       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:36:58.982243       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:36:58.982270       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:58.982382       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:36:58.982452       1 main.go:299] handling current node
	I0729 01:37:08.984567       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:37:08.984634       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:37:08.984824       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:37:08.984846       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:37:08.984911       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:37:08.984934       1 main.go:299] handling current node
	
	
	==> kindnet [ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442] <==
	I0729 01:39:59.177224       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:40:09.183829       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:40:09.183892       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:40:09.184115       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:40:09.184150       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:40:09.184243       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:40:09.184309       1 main.go:299] handling current node
	I0729 01:40:19.176156       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:40:19.176342       1 main.go:299] handling current node
	I0729 01:40:19.176423       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:40:19.176537       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:40:19.176735       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:40:19.176773       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.2.0/24] 
	I0729 01:40:29.176125       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:40:29.176254       1 main.go:299] handling current node
	I0729 01:40:29.176281       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:40:29.176341       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:40:29.176622       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:40:29.176669       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.2.0/24] 
	I0729 01:40:39.176130       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:40:39.176274       1 main.go:299] handling current node
	I0729 01:40:39.176315       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:40:39.176350       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:40:39.176537       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:40:39.176583       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4] <==
	I0729 01:38:56.538580       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 01:38:56.538632       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 01:38:56.540125       1 aggregator.go:165] initial CRD sync complete...
	I0729 01:38:56.540170       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 01:38:56.540176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 01:38:56.570683       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 01:38:56.571125       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 01:38:56.571157       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 01:38:56.575700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:38:56.576561       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:38:56.577034       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:38:56.582787       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 01:38:56.636874       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 01:38:56.640285       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:38:56.640397       1 policy_source.go:224] refreshing policies
	I0729 01:38:56.641921       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:38:56.642283       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:38:57.482837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:38:58.880838       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:38:59.075212       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:38:59.105938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:38:59.206732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:38:59.213753       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:39:09.794128       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:39:09.841839       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8] <==
	I0729 01:37:10.533692       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 01:37:10.533742       1 controller.go:157] Shutting down quota evaluator
	I0729 01:37:10.533773       1 controller.go:176] quota evaluator worker shutdown
	W0729 01:37:10.535230       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 01:37:10.537473       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 01:37:10.537495       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 01:37:10.540225       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0729 01:37:10.541448       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.542114       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.542332       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.542561       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 01:37:10.542831       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0729 01:37:10.543265       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.543791       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544183       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544322       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544447       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544590       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.547687       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 01:37:10.548064       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 01:37:10.548234       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0729 01:37:10.549784       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:37:10.550444       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:37:10.550531       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:37:10.550601       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4] <==
	I0729 01:32:32.433489       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 01:33:00.446409       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m02\" does not exist"
	I0729 01:33:00.456735       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:33:02.437650       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-060411-m02"
	I0729 01:33:20.863400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:33:23.077276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.085395ms"
	I0729 01:33:23.116097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.955172ms"
	I0729 01:33:23.116315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.264µs"
	I0729 01:33:26.404458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.115837ms"
	I0729 01:33:26.404605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.802µs"
	I0729 01:33:26.873818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.126979ms"
	I0729 01:33:26.874205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="138.343µs"
	I0729 01:33:55.438623       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:33:55.439522       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m03\" does not exist"
	I0729 01:33:55.456733       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m03" podCIDRs=["10.244.2.0/24"]
	I0729 01:33:57.456226       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-060411-m03"
	I0729 01:34:16.029088       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:34:43.998612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:34:45.065561       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m03\" does not exist"
	I0729 01:34:45.066586       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:34:45.096333       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m03" podCIDRs=["10.244.3.0/24"]
	I0729 01:35:04.777304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:35:47.516543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:35:47.581431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.42228ms"
	I0729 01:35:47.581880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.174µs"
	
	
	==> kube-controller-manager [e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb] <==
	I0729 01:39:10.428193       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 01:39:10.451174       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 01:39:34.364280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.678501ms"
	I0729 01:39:34.384210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.802643ms"
	I0729 01:39:34.398743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.440069ms"
	I0729 01:39:34.399055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.777µs"
	I0729 01:39:35.538654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.599µs"
	I0729 01:39:38.527751       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m02\" does not exist"
	I0729 01:39:38.542202       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:39:40.413376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.829µs"
	I0729 01:39:40.441625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.242µs"
	I0729 01:39:40.453716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.565µs"
	I0729 01:39:40.485762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.889µs"
	I0729 01:39:40.493589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.463µs"
	I0729 01:39:40.496751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.997µs"
	I0729 01:39:58.395424       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:39:58.417116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.228µs"
	I0729 01:39:58.433238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.477µs"
	I0729 01:40:02.804772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.472325ms"
	I0729 01:40:02.805231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.554µs"
	I0729 01:40:16.574522       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:40:17.843731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:40:17.844707       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m03\" does not exist"
	I0729 01:40:17.856277       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m03" podCIDRs=["10.244.2.0/24"]
	I0729 01:40:37.399536       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m03"
	
	
	==> kube-proxy [51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846] <==
	I0729 01:38:58.382212       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:38:58.409703       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0729 01:38:58.486806       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:38:58.486884       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:38:58.486900       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:38:58.491578       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:38:58.491738       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:38:58.491751       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:38:58.493570       1 config.go:192] "Starting service config controller"
	I0729 01:38:58.493592       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:38:58.493622       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:38:58.493628       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:38:58.494390       1 config.go:319] "Starting node config controller"
	I0729 01:38:58.494399       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:38:58.595110       1 shared_informer.go:320] Caches are synced for node config
	I0729 01:38:58.595141       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:38:58.595151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e] <==
	I0729 01:32:14.499053       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:32:14.512339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0729 01:32:14.548888       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:32:14.548920       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:32:14.548936       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:32:14.552463       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:32:14.552744       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:32:14.552792       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:32:14.554538       1 config.go:192] "Starting service config controller"
	I0729 01:32:14.554890       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:32:14.555029       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:32:14.555056       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:32:14.555847       1 config.go:319] "Starting node config controller"
	I0729 01:32:14.557487       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:32:14.655924       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:32:14.656042       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:32:14.657609       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277] <==
	I0729 01:38:54.358752       1 serving.go:380] Generated self-signed cert in-memory
	W0729 01:38:56.528666       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 01:38:56.528818       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:38:56.528850       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 01:38:56.528922       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 01:38:56.560631       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 01:38:56.560669       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:38:56.565608       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 01:38:56.565705       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 01:38:56.565732       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 01:38:56.565745       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:38:56.666379       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd] <==
	E0729 01:31:57.162503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 01:31:57.162605       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:31:57.162642       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:31:57.996608       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 01:31:57.996735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 01:31:58.038442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 01:31:58.038470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 01:31:58.182230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 01:31:58.182417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 01:31:58.184823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:31:58.184894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:31:58.195169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 01:31:58.195209       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 01:31:58.198160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 01:31:58.198280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 01:31:58.286119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:31:58.286218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:31:58.312177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:31:58.312229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:31:58.399698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 01:31:58.399819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:31:58.529728       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:31:58.529822       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 01:32:01.147520       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 01:37:10.507217       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 01:38:53 multinode-060411 kubelet[3087]: I0729 01:38:53.908909    3087 kubelet_node_status.go:73] "Attempting to register node" node="multinode-060411"
	Jul 29 01:38:56 multinode-060411 kubelet[3087]: I0729 01:38:56.695639    3087 kubelet_node_status.go:112] "Node was previously registered" node="multinode-060411"
	Jul 29 01:38:56 multinode-060411 kubelet[3087]: I0729 01:38:56.695753    3087 kubelet_node_status.go:76] "Successfully registered node" node="multinode-060411"
	Jul 29 01:38:56 multinode-060411 kubelet[3087]: I0729 01:38:56.697726    3087 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 01:38:56 multinode-060411 kubelet[3087]: I0729 01:38:56.698921    3087 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: E0729 01:38:57.036083    3087 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-060411\" already exists" pod="kube-system/kube-controller-manager-multinode-060411"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.377023    3087 apiserver.go:52] "Watching apiserver"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.380410    3087 topology_manager.go:215] "Topology Admit Handler" podUID="18fe5ccd-46a0-4197-a687-af0fca1f518d" podNamespace="kube-system" podName="kube-proxy-k7k6j"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.380564    3087 topology_manager.go:215] "Topology Admit Handler" podUID="6fd59518-57af-4a69-8697-f7fbb6a51b5e" podNamespace="kube-system" podName="kindnet-8csbb"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.380618    3087 topology_manager.go:215] "Topology Admit Handler" podUID="6e2f01ff-afc1-464e-a3f2-e7b7d11203ad" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mnz72"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.380689    3087 topology_manager.go:215] "Topology Admit Handler" podUID="83dec14c-5f93-4dee-bb62-52cae06307f7" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.380766    3087 topology_manager.go:215] "Topology Admit Handler" podUID="11b7cc27-3dde-47b9-afd8-649382e4ad37" podNamespace="default" podName="busybox-fc5497c4f-lfmwp"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.391302    3087 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.429918    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd59518-57af-4a69-8697-f7fbb6a51b5e-lib-modules\") pod \"kindnet-8csbb\" (UID: \"6fd59518-57af-4a69-8697-f7fbb6a51b5e\") " pod="kube-system/kindnet-8csbb"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430038    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18fe5ccd-46a0-4197-a687-af0fca1f518d-xtables-lock\") pod \"kube-proxy-k7k6j\" (UID: \"18fe5ccd-46a0-4197-a687-af0fca1f518d\") " pod="kube-system/kube-proxy-k7k6j"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430061    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18fe5ccd-46a0-4197-a687-af0fca1f518d-lib-modules\") pod \"kube-proxy-k7k6j\" (UID: \"18fe5ccd-46a0-4197-a687-af0fca1f518d\") " pod="kube-system/kube-proxy-k7k6j"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430081    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd59518-57af-4a69-8697-f7fbb6a51b5e-xtables-lock\") pod \"kindnet-8csbb\" (UID: \"6fd59518-57af-4a69-8697-f7fbb6a51b5e\") " pod="kube-system/kindnet-8csbb"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430095    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83dec14c-5f93-4dee-bb62-52cae06307f7-tmp\") pod \"storage-provisioner\" (UID: \"83dec14c-5f93-4dee-bb62-52cae06307f7\") " pod="kube-system/storage-provisioner"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430119    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6fd59518-57af-4a69-8697-f7fbb6a51b5e-cni-cfg\") pod \"kindnet-8csbb\" (UID: \"6fd59518-57af-4a69-8697-f7fbb6a51b5e\") " pod="kube-system/kindnet-8csbb"
	Jul 29 01:39:07 multinode-060411 kubelet[3087]: I0729 01:39:07.381434    3087 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 01:39:52 multinode-060411 kubelet[3087]: E0729 01:39:52.444732    3087 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 01:40:39.908300   46992 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-9421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-060411 -n multinode-060411
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-060411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (334.33s)
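Note on the kubelet log above: it ends with the recurring "Could not set up iptables canary" error because the guest reports "can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)". A minimal manual check, assuming the same profile name and binary path used throughout this report (the module name ip6table_nat is the usual provider of that table and is an assumption here, not something the log confirms):

	out/minikube-linux-amd64 -p multinode-060411 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"

If the module loads and the nat table lists, the canary error would be expected to stop repeating; if not, the guest kernel/ip6tables mismatch the log hints at is the more likely cause.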

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 stop
E0729 01:41:27.216518   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:42:23.073335   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060411 stop: exit status 82 (2m0.467476628s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-060411-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-060411 stop": exit status 82
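The stop above fails with GUEST_STOP_TIMEOUT while the VM is still reported "Running". A minimal follow-up sketch, assuming the same profile and binary path shown in this report and reusing flags that appear elsewhere in it (this is not the harness's own retry logic): retry the stop with verbose logging, then capture the full log file the message box asks for.

	out/minikube-linux-amd64 -p multinode-060411 stop --alsologtostderr -v=7
	out/minikube-linux-amd64 -p multinode-060411 logs --file=logs.txt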
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060411 status: exit status 3 (18.701886169s)

                                                
                                                
-- stdout --
	multinode-060411
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060411-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 01:43:03.263393   47653 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host
	E0729 01:43:03.263424   47653 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.233:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-060411 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-060411 -n multinode-060411
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-060411 logs -n 25: (1.49435964s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411:/home/docker/cp-test_multinode-060411-m02_multinode-060411.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411 sudo cat                                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m02_multinode-060411.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03:/home/docker/cp-test_multinode-060411-m02_multinode-060411-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411-m03 sudo cat                                   | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m02_multinode-060411-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp testdata/cp-test.txt                                                | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile705326141/001/cp-test_multinode-060411-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411:/home/docker/cp-test_multinode-060411-m03_multinode-060411.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411 sudo cat                                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m03_multinode-060411.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02:/home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411-m02 sudo cat                                   | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-060411 node stop m03                                                          | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	| node    | multinode-060411 node start                                                             | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:35 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:35 UTC |                     |
	| stop    | -p multinode-060411                                                                     | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:35 UTC |                     |
	| start   | -p multinode-060411                                                                     | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:37 UTC | 29 Jul 24 01:40 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC |                     |
	| node    | multinode-060411 node delete                                                            | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC | 29 Jul 24 01:40 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-060411 stop                                                                   | multinode-060411 | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:37:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:37:09.455841   45889 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:37:09.455977   45889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:37:09.455990   45889 out.go:304] Setting ErrFile to fd 2...
	I0729 01:37:09.455996   45889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:37:09.456188   45889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:37:09.456750   45889 out.go:298] Setting JSON to false
	I0729 01:37:09.457704   45889 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4775,"bootTime":1722212254,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:37:09.457760   45889 start.go:139] virtualization: kvm guest
	I0729 01:37:09.460229   45889 out.go:177] * [multinode-060411] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:37:09.461579   45889 notify.go:220] Checking for updates...
	I0729 01:37:09.461629   45889 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:37:09.463021   45889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:37:09.464587   45889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:37:09.466004   45889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:37:09.467358   45889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:37:09.468611   45889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:37:09.470510   45889 config.go:182] Loaded profile config "multinode-060411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:37:09.470606   45889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:37:09.471026   45889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:37:09.471121   45889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:37:09.487190   45889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0729 01:37:09.487588   45889 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:37:09.488118   45889 main.go:141] libmachine: Using API Version  1
	I0729 01:37:09.488148   45889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:37:09.488487   45889 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:37:09.488675   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:37:09.524968   45889 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:37:09.526329   45889 start.go:297] selected driver: kvm2
	I0729 01:37:09.526341   45889 start.go:901] validating driver "kvm2" against &{Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:37:09.526480   45889 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:37:09.526815   45889 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:37:09.526881   45889 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:37:09.542086   45889 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:37:09.542748   45889 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:37:09.542823   45889 cni.go:84] Creating CNI manager for ""
	I0729 01:37:09.542837   45889 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 01:37:09.542913   45889 start.go:340] cluster config:
	{Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:37:09.543095   45889 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:37:09.544981   45889 out.go:177] * Starting "multinode-060411" primary control-plane node in "multinode-060411" cluster
	I0729 01:37:09.546322   45889 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:37:09.546358   45889 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:37:09.546365   45889 cache.go:56] Caching tarball of preloaded images
	I0729 01:37:09.546438   45889 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:37:09.546447   45889 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:37:09.546559   45889 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/config.json ...
	I0729 01:37:09.546744   45889 start.go:360] acquireMachinesLock for multinode-060411: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:37:09.546785   45889 start.go:364] duration metric: took 23.53µs to acquireMachinesLock for "multinode-060411"
	I0729 01:37:09.546796   45889 start.go:96] Skipping create...Using existing machine configuration
	I0729 01:37:09.546801   45889 fix.go:54] fixHost starting: 
	I0729 01:37:09.547043   45889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:37:09.547099   45889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:37:09.561568   45889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0729 01:37:09.562011   45889 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:37:09.562527   45889 main.go:141] libmachine: Using API Version  1
	I0729 01:37:09.562550   45889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:37:09.562891   45889 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:37:09.563118   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:37:09.563322   45889 main.go:141] libmachine: (multinode-060411) Calling .GetState
	I0729 01:37:09.564986   45889 fix.go:112] recreateIfNeeded on multinode-060411: state=Running err=<nil>
	W0729 01:37:09.565011   45889 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 01:37:09.567008   45889 out.go:177] * Updating the running kvm2 "multinode-060411" VM ...
	I0729 01:37:09.568492   45889 machine.go:94] provisionDockerMachine start ...
	I0729 01:37:09.568518   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:37:09.568758   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.571515   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.571915   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.571936   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.572080   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:09.572246   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.572387   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.572552   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:09.572705   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:09.572946   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:09.572962   45889 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:37:09.680701   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-060411
	
	I0729 01:37:09.680735   45889 main.go:141] libmachine: (multinode-060411) Calling .GetMachineName
	I0729 01:37:09.680996   45889 buildroot.go:166] provisioning hostname "multinode-060411"
	I0729 01:37:09.681028   45889 main.go:141] libmachine: (multinode-060411) Calling .GetMachineName
	I0729 01:37:09.681211   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.683887   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.684218   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.684244   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.684399   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:09.684590   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.684737   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.684901   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:09.685057   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:09.685250   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:09.685267   45889 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-060411 && echo "multinode-060411" | sudo tee /etc/hostname
	I0729 01:37:09.810051   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-060411
	
	I0729 01:37:09.810081   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.813215   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.813652   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.813683   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.813828   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:09.814012   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.814180   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:09.814305   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:09.814455   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:09.814615   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:09.814632   45889 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-060411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-060411/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-060411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:37:09.915940   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:37:09.915972   45889 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:37:09.916057   45889 buildroot.go:174] setting up certificates
	I0729 01:37:09.916067   45889 provision.go:84] configureAuth start
	I0729 01:37:09.916078   45889 main.go:141] libmachine: (multinode-060411) Calling .GetMachineName
	I0729 01:37:09.916330   45889 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:37:09.919112   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.919481   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.919511   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.919629   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:09.921890   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.922322   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:09.922362   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:09.922510   45889 provision.go:143] copyHostCerts
	I0729 01:37:09.922540   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:37:09.922581   45889 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:37:09.922596   45889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:37:09.922680   45889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:37:09.922776   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:37:09.922800   45889 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:37:09.922807   45889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:37:09.922835   45889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:37:09.922920   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:37:09.922939   45889 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:37:09.922945   45889 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:37:09.922967   45889 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:37:09.923012   45889 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.multinode-060411 san=[127.0.0.1 192.168.39.140 localhost minikube multinode-060411]
	I0729 01:37:10.227610   45889 provision.go:177] copyRemoteCerts
	I0729 01:37:10.227665   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:37:10.227688   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:10.230757   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.231165   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:10.231192   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.231374   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:10.231578   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:10.231716   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:10.231813   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:37:10.314357   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 01:37:10.314445   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:37:10.339500   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 01:37:10.339574   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 01:37:10.366644   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 01:37:10.366769   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:37:10.393783   45889 provision.go:87] duration metric: took 477.703082ms to configureAuth
	I0729 01:37:10.393813   45889 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:37:10.394040   45889 config.go:182] Loaded profile config "multinode-060411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:37:10.394112   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:37:10.397088   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.397539   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:37:10.397572   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:37:10.397752   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:37:10.397919   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:10.398068   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:37:10.398182   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:37:10.398336   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:37:10.398498   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:37:10.398514   45889 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:38:41.113529   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:38:41.113557   45889 machine.go:97] duration metric: took 1m31.545048794s to provisionDockerMachine
	I0729 01:38:41.113568   45889 start.go:293] postStartSetup for "multinode-060411" (driver="kvm2")
	I0729 01:38:41.113578   45889 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:38:41.113593   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.113896   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:38:41.113930   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.117048   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.117575   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.117604   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.117756   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.117978   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.118168   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.118299   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:38:41.198468   45889 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:38:41.203071   45889 command_runner.go:130] > NAME=Buildroot
	I0729 01:38:41.203102   45889 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 01:38:41.203110   45889 command_runner.go:130] > ID=buildroot
	I0729 01:38:41.203117   45889 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 01:38:41.203125   45889 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 01:38:41.203163   45889 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:38:41.203179   45889 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:38:41.203238   45889 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:38:41.203315   45889 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:38:41.203327   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /etc/ssl/certs/166232.pem
	I0729 01:38:41.203410   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:38:41.213174   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:38:41.238596   45889 start.go:296] duration metric: took 125.014807ms for postStartSetup
	I0729 01:38:41.238637   45889 fix.go:56] duration metric: took 1m31.691836489s for fixHost
	I0729 01:38:41.238656   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.241479   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.241940   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.241986   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.242178   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.242385   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.242620   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.242750   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.242904   45889 main.go:141] libmachine: Using SSH client type: native
	I0729 01:38:41.243073   45889 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0729 01:38:41.243085   45889 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:38:41.343764   45889 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217121.314532067
	
	I0729 01:38:41.343789   45889 fix.go:216] guest clock: 1722217121.314532067
	I0729 01:38:41.343797   45889 fix.go:229] Guest: 2024-07-29 01:38:41.314532067 +0000 UTC Remote: 2024-07-29 01:38:41.238641824 +0000 UTC m=+91.817243193 (delta=75.890243ms)
	I0729 01:38:41.343815   45889 fix.go:200] guest clock delta is within tolerance: 75.890243ms
	I0729 01:38:41.343820   45889 start.go:83] releasing machines lock for "multinode-060411", held for 1m31.797028617s
	I0729 01:38:41.343837   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.344084   45889 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:38:41.346830   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.347273   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.347296   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.347519   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.348069   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.348241   45889 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:38:41.348325   45889 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:38:41.348375   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.348473   45889 ssh_runner.go:195] Run: cat /version.json
	I0729 01:38:41.348492   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:38:41.351261   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.351596   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.351628   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.351656   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.351806   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.351983   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.352054   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:41.352077   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:41.352116   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.352273   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:38:41.352271   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:38:41.352404   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:38:41.352568   45889 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:38:41.352708   45889 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:38:41.445766   45889 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 01:38:41.445805   45889 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 01:38:41.445955   45889 ssh_runner.go:195] Run: systemctl --version
	I0729 01:38:41.452175   45889 command_runner.go:130] > systemd 252 (252)
	I0729 01:38:41.452217   45889 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 01:38:41.452285   45889 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:38:41.613304   45889 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 01:38:41.620128   45889 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 01:38:41.620444   45889 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:38:41.620510   45889 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:38:41.629856   45889 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:38:41.629879   45889 start.go:495] detecting cgroup driver to use...
	I0729 01:38:41.629934   45889 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:38:41.645649   45889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:38:41.660060   45889 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:38:41.660114   45889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:38:41.673820   45889 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:38:41.687348   45889 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:38:41.833287   45889 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:38:41.979000   45889 docker.go:233] disabling docker service ...
	I0729 01:38:41.979085   45889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:38:41.996524   45889 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:38:42.010071   45889 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:38:42.153527   45889 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:38:42.294213   45889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:38:42.308590   45889 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:38:42.327898   45889 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 01:38:42.327943   45889 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:38:42.327999   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.339018   45889 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:38:42.339116   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.350345   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.361804   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.373148   45889 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:38:42.384588   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.397188   45889 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.408389   45889 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:38:42.420377   45889 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:38:42.431699   45889 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 01:38:42.431760   45889 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:38:42.443000   45889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:38:42.616768   45889 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:38:50.152688   45889 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.535884066s)
	I0729 01:38:50.152720   45889 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:38:50.152777   45889 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:38:50.157741   45889 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 01:38:50.157771   45889 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 01:38:50.157785   45889 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0729 01:38:50.157795   45889 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 01:38:50.157803   45889 command_runner.go:130] > Access: 2024-07-29 01:38:50.013635142 +0000
	I0729 01:38:50.157811   45889 command_runner.go:130] > Modify: 2024-07-29 01:38:50.013635142 +0000
	I0729 01:38:50.157819   45889 command_runner.go:130] > Change: 2024-07-29 01:38:50.013635142 +0000
	I0729 01:38:50.157824   45889 command_runner.go:130] >  Birth: -
	I0729 01:38:50.157885   45889 start.go:563] Will wait 60s for crictl version
	I0729 01:38:50.157942   45889 ssh_runner.go:195] Run: which crictl
	I0729 01:38:50.162268   45889 command_runner.go:130] > /usr/bin/crictl
	I0729 01:38:50.162342   45889 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:38:50.200730   45889 command_runner.go:130] > Version:  0.1.0
	I0729 01:38:50.200752   45889 command_runner.go:130] > RuntimeName:  cri-o
	I0729 01:38:50.200759   45889 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 01:38:50.200764   45889 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 01:38:50.202828   45889 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:38:50.202892   45889 ssh_runner.go:195] Run: crio --version
	I0729 01:38:50.230166   45889 command_runner.go:130] > crio version 1.29.1
	I0729 01:38:50.230192   45889 command_runner.go:130] > Version:        1.29.1
	I0729 01:38:50.230237   45889 command_runner.go:130] > GitCommit:      unknown
	I0729 01:38:50.230246   45889 command_runner.go:130] > GitCommitDate:  unknown
	I0729 01:38:50.230253   45889 command_runner.go:130] > GitTreeState:   clean
	I0729 01:38:50.230265   45889 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 01:38:50.230279   45889 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 01:38:50.230288   45889 command_runner.go:130] > Compiler:       gc
	I0729 01:38:50.230298   45889 command_runner.go:130] > Platform:       linux/amd64
	I0729 01:38:50.230304   45889 command_runner.go:130] > Linkmode:       dynamic
	I0729 01:38:50.230315   45889 command_runner.go:130] > BuildTags:      
	I0729 01:38:50.230322   45889 command_runner.go:130] >   containers_image_ostree_stub
	I0729 01:38:50.230329   45889 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 01:38:50.230336   45889 command_runner.go:130] >   btrfs_noversion
	I0729 01:38:50.230344   45889 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 01:38:50.230351   45889 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 01:38:50.230359   45889 command_runner.go:130] >   seccomp
	I0729 01:38:50.230365   45889 command_runner.go:130] > LDFlags:          unknown
	I0729 01:38:50.230372   45889 command_runner.go:130] > SeccompEnabled:   true
	I0729 01:38:50.230379   45889 command_runner.go:130] > AppArmorEnabled:  false
	I0729 01:38:50.231530   45889 ssh_runner.go:195] Run: crio --version
	I0729 01:38:50.262075   45889 command_runner.go:130] > crio version 1.29.1
	I0729 01:38:50.262102   45889 command_runner.go:130] > Version:        1.29.1
	I0729 01:38:50.262109   45889 command_runner.go:130] > GitCommit:      unknown
	I0729 01:38:50.262113   45889 command_runner.go:130] > GitCommitDate:  unknown
	I0729 01:38:50.262118   45889 command_runner.go:130] > GitTreeState:   clean
	I0729 01:38:50.262124   45889 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 01:38:50.262129   45889 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 01:38:50.262133   45889 command_runner.go:130] > Compiler:       gc
	I0729 01:38:50.262137   45889 command_runner.go:130] > Platform:       linux/amd64
	I0729 01:38:50.262141   45889 command_runner.go:130] > Linkmode:       dynamic
	I0729 01:38:50.262148   45889 command_runner.go:130] > BuildTags:      
	I0729 01:38:50.262152   45889 command_runner.go:130] >   containers_image_ostree_stub
	I0729 01:38:50.262156   45889 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 01:38:50.262159   45889 command_runner.go:130] >   btrfs_noversion
	I0729 01:38:50.262164   45889 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 01:38:50.262168   45889 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 01:38:50.262172   45889 command_runner.go:130] >   seccomp
	I0729 01:38:50.262177   45889 command_runner.go:130] > LDFlags:          unknown
	I0729 01:38:50.262184   45889 command_runner.go:130] > SeccompEnabled:   true
	I0729 01:38:50.262188   45889 command_runner.go:130] > AppArmorEnabled:  false
	I0729 01:38:50.264001   45889 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:38:50.265381   45889 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:38:50.268240   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:50.268640   45889 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:38:50.268663   45889 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:38:50.268816   45889 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:38:50.273037   45889 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 01:38:50.273127   45889 kubeadm.go:883] updating cluster {Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:38:50.273300   45889 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:38:50.273356   45889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:38:50.317638   45889 command_runner.go:130] > {
	I0729 01:38:50.317656   45889 command_runner.go:130] >   "images": [
	I0729 01:38:50.317660   45889 command_runner.go:130] >     {
	I0729 01:38:50.317668   45889 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 01:38:50.317673   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317678   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 01:38:50.317682   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317686   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.317694   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 01:38:50.317700   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 01:38:50.317704   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317710   45889 command_runner.go:130] >       "size": "87165492",
	I0729 01:38:50.317716   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.317721   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.317729   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.317738   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.317742   45889 command_runner.go:130] >     },
	I0729 01:38:50.317748   45889 command_runner.go:130] >     {
	I0729 01:38:50.317760   45889 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 01:38:50.317764   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317769   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 01:38:50.317775   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317779   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.317788   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 01:38:50.317796   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 01:38:50.317802   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317806   45889 command_runner.go:130] >       "size": "87174707",
	I0729 01:38:50.317812   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.317822   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.317833   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.317843   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.317851   45889 command_runner.go:130] >     },
	I0729 01:38:50.317858   45889 command_runner.go:130] >     {
	I0729 01:38:50.317865   45889 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 01:38:50.317871   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317876   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 01:38:50.317882   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317886   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.317895   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 01:38:50.317906   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 01:38:50.317915   45889 command_runner.go:130] >       ],
	I0729 01:38:50.317929   45889 command_runner.go:130] >       "size": "1363676",
	I0729 01:38:50.317939   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.317948   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.317958   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.317965   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.317969   45889 command_runner.go:130] >     },
	I0729 01:38:50.317975   45889 command_runner.go:130] >     {
	I0729 01:38:50.317981   45889 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 01:38:50.317987   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.317992   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 01:38:50.317998   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318002   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318016   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 01:38:50.318035   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 01:38:50.318044   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318053   45889 command_runner.go:130] >       "size": "31470524",
	I0729 01:38:50.318062   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.318072   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318082   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318089   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318093   45889 command_runner.go:130] >     },
	I0729 01:38:50.318107   45889 command_runner.go:130] >     {
	I0729 01:38:50.318119   45889 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 01:38:50.318129   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318138   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 01:38:50.318147   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318157   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318172   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 01:38:50.318186   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 01:38:50.318194   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318200   45889 command_runner.go:130] >       "size": "61245718",
	I0729 01:38:50.318206   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.318212   45889 command_runner.go:130] >       "username": "nonroot",
	I0729 01:38:50.318223   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318232   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318238   45889 command_runner.go:130] >     },
	I0729 01:38:50.318243   45889 command_runner.go:130] >     {
	I0729 01:38:50.318254   45889 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 01:38:50.318264   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318274   45889 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 01:38:50.318281   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318287   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318297   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 01:38:50.318312   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 01:38:50.318320   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318327   45889 command_runner.go:130] >       "size": "150779692",
	I0729 01:38:50.318336   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318346   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318354   45889 command_runner.go:130] >       },
	I0729 01:38:50.318361   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318370   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318377   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318385   45889 command_runner.go:130] >     },
	I0729 01:38:50.318390   45889 command_runner.go:130] >     {
	I0729 01:38:50.318416   45889 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 01:38:50.318430   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318438   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 01:38:50.318443   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318454   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318469   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 01:38:50.318484   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 01:38:50.318494   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318503   45889 command_runner.go:130] >       "size": "117609954",
	I0729 01:38:50.318510   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318515   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318523   45889 command_runner.go:130] >       },
	I0729 01:38:50.318532   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318541   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318551   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318560   45889 command_runner.go:130] >     },
	I0729 01:38:50.318565   45889 command_runner.go:130] >     {
	I0729 01:38:50.318578   45889 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 01:38:50.318587   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318594   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 01:38:50.318599   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318604   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318649   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 01:38:50.318666   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 01:38:50.318672   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318679   45889 command_runner.go:130] >       "size": "112198984",
	I0729 01:38:50.318689   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318696   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318704   45889 command_runner.go:130] >       },
	I0729 01:38:50.318711   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318717   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318724   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318729   45889 command_runner.go:130] >     },
	I0729 01:38:50.318733   45889 command_runner.go:130] >     {
	I0729 01:38:50.318742   45889 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 01:38:50.318748   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318756   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 01:38:50.318761   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318768   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318783   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 01:38:50.318794   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 01:38:50.318799   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318807   45889 command_runner.go:130] >       "size": "85953945",
	I0729 01:38:50.318813   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.318819   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318825   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318830   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318833   45889 command_runner.go:130] >     },
	I0729 01:38:50.318836   45889 command_runner.go:130] >     {
	I0729 01:38:50.318846   45889 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 01:38:50.318855   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.318863   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 01:38:50.318871   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318878   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.318890   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 01:38:50.318905   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 01:38:50.318914   45889 command_runner.go:130] >       ],
	I0729 01:38:50.318921   45889 command_runner.go:130] >       "size": "63051080",
	I0729 01:38:50.318932   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.318938   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.318944   45889 command_runner.go:130] >       },
	I0729 01:38:50.318950   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.318956   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.318965   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.318971   45889 command_runner.go:130] >     },
	I0729 01:38:50.318980   45889 command_runner.go:130] >     {
	I0729 01:38:50.318990   45889 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 01:38:50.318999   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.319008   45889 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 01:38:50.319013   45889 command_runner.go:130] >       ],
	I0729 01:38:50.319021   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.319034   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 01:38:50.319045   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 01:38:50.319053   45889 command_runner.go:130] >       ],
	I0729 01:38:50.319071   45889 command_runner.go:130] >       "size": "750414",
	I0729 01:38:50.319080   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.319087   45889 command_runner.go:130] >         "value": "65535"
	I0729 01:38:50.319095   45889 command_runner.go:130] >       },
	I0729 01:38:50.319108   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.319116   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.319123   45889 command_runner.go:130] >       "pinned": true
	I0729 01:38:50.319130   45889 command_runner.go:130] >     }
	I0729 01:38:50.319133   45889 command_runner.go:130] >   ]
	I0729 01:38:50.319138   45889 command_runner.go:130] > }
	I0729 01:38:50.319330   45889 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:38:50.319343   45889 crio.go:433] Images already preloaded, skipping extraction
	I0729 01:38:50.319395   45889 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:38:50.354230   45889 command_runner.go:130] > {
	I0729 01:38:50.354251   45889 command_runner.go:130] >   "images": [
	I0729 01:38:50.354255   45889 command_runner.go:130] >     {
	I0729 01:38:50.354263   45889 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 01:38:50.354268   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354276   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 01:38:50.354281   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354288   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354301   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 01:38:50.354313   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 01:38:50.354318   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354325   45889 command_runner.go:130] >       "size": "87165492",
	I0729 01:38:50.354332   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354337   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354349   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354356   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354362   45889 command_runner.go:130] >     },
	I0729 01:38:50.354367   45889 command_runner.go:130] >     {
	I0729 01:38:50.354378   45889 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 01:38:50.354385   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354394   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 01:38:50.354401   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354408   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354419   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 01:38:50.354431   45889 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 01:38:50.354438   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354444   45889 command_runner.go:130] >       "size": "87174707",
	I0729 01:38:50.354448   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354456   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354463   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354469   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354476   45889 command_runner.go:130] >     },
	I0729 01:38:50.354481   45889 command_runner.go:130] >     {
	I0729 01:38:50.354494   45889 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 01:38:50.354503   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354515   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 01:38:50.354521   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354529   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354536   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 01:38:50.354550   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 01:38:50.354559   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354569   45889 command_runner.go:130] >       "size": "1363676",
	I0729 01:38:50.354578   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354587   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354604   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354612   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354616   45889 command_runner.go:130] >     },
	I0729 01:38:50.354622   45889 command_runner.go:130] >     {
	I0729 01:38:50.354631   45889 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 01:38:50.354641   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354652   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 01:38:50.354660   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354669   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354683   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 01:38:50.354700   45889 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 01:38:50.354706   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354713   45889 command_runner.go:130] >       "size": "31470524",
	I0729 01:38:50.354723   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354732   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.354741   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354750   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354759   45889 command_runner.go:130] >     },
	I0729 01:38:50.354768   45889 command_runner.go:130] >     {
	I0729 01:38:50.354780   45889 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 01:38:50.354787   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354793   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 01:38:50.354802   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354812   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354827   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 01:38:50.354841   45889 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 01:38:50.354849   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354858   45889 command_runner.go:130] >       "size": "61245718",
	I0729 01:38:50.354866   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.354870   45889 command_runner.go:130] >       "username": "nonroot",
	I0729 01:38:50.354873   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.354879   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.354887   45889 command_runner.go:130] >     },
	I0729 01:38:50.354896   45889 command_runner.go:130] >     {
	I0729 01:38:50.354908   45889 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 01:38:50.354918   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.354928   45889 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 01:38:50.354936   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354946   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.354956   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 01:38:50.354970   45889 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 01:38:50.354978   45889 command_runner.go:130] >       ],
	I0729 01:38:50.354988   45889 command_runner.go:130] >       "size": "150779692",
	I0729 01:38:50.354996   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355003   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355016   45889 command_runner.go:130] >       },
	I0729 01:38:50.355025   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355032   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355037   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355042   45889 command_runner.go:130] >     },
	I0729 01:38:50.355047   45889 command_runner.go:130] >     {
	I0729 01:38:50.355069   45889 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 01:38:50.355079   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355094   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 01:38:50.355108   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355117   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355129   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 01:38:50.355141   45889 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 01:38:50.355150   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355158   45889 command_runner.go:130] >       "size": "117609954",
	I0729 01:38:50.355167   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355176   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355185   45889 command_runner.go:130] >       },
	I0729 01:38:50.355194   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355203   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355211   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355217   45889 command_runner.go:130] >     },
	I0729 01:38:50.355221   45889 command_runner.go:130] >     {
	I0729 01:38:50.355235   45889 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 01:38:50.355245   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355256   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 01:38:50.355264   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355274   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355295   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 01:38:50.355305   45889 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 01:38:50.355310   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355317   45889 command_runner.go:130] >       "size": "112198984",
	I0729 01:38:50.355326   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355336   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355345   45889 command_runner.go:130] >       },
	I0729 01:38:50.355354   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355363   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355370   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355378   45889 command_runner.go:130] >     },
	I0729 01:38:50.355382   45889 command_runner.go:130] >     {
	I0729 01:38:50.355394   45889 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 01:38:50.355403   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355412   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 01:38:50.355420   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355427   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355441   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 01:38:50.355459   45889 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 01:38:50.355466   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355470   45889 command_runner.go:130] >       "size": "85953945",
	I0729 01:38:50.355478   45889 command_runner.go:130] >       "uid": null,
	I0729 01:38:50.355488   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355495   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355504   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355513   45889 command_runner.go:130] >     },
	I0729 01:38:50.355521   45889 command_runner.go:130] >     {
	I0729 01:38:50.355534   45889 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 01:38:50.355543   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355552   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 01:38:50.355558   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355563   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355578   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 01:38:50.355593   45889 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 01:38:50.355601   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355610   45889 command_runner.go:130] >       "size": "63051080",
	I0729 01:38:50.355619   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355626   45889 command_runner.go:130] >         "value": "0"
	I0729 01:38:50.355633   45889 command_runner.go:130] >       },
	I0729 01:38:50.355637   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355643   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355649   45889 command_runner.go:130] >       "pinned": false
	I0729 01:38:50.355657   45889 command_runner.go:130] >     },
	I0729 01:38:50.355663   45889 command_runner.go:130] >     {
	I0729 01:38:50.355675   45889 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 01:38:50.355684   45889 command_runner.go:130] >       "repoTags": [
	I0729 01:38:50.355694   45889 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 01:38:50.355702   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355709   45889 command_runner.go:130] >       "repoDigests": [
	I0729 01:38:50.355721   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 01:38:50.355732   45889 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 01:38:50.355741   45889 command_runner.go:130] >       ],
	I0729 01:38:50.355747   45889 command_runner.go:130] >       "size": "750414",
	I0729 01:38:50.355758   45889 command_runner.go:130] >       "uid": {
	I0729 01:38:50.355767   45889 command_runner.go:130] >         "value": "65535"
	I0729 01:38:50.355775   45889 command_runner.go:130] >       },
	I0729 01:38:50.355784   45889 command_runner.go:130] >       "username": "",
	I0729 01:38:50.355793   45889 command_runner.go:130] >       "spec": null,
	I0729 01:38:50.355801   45889 command_runner.go:130] >       "pinned": true
	I0729 01:38:50.355807   45889 command_runner.go:130] >     }
	I0729 01:38:50.355810   45889 command_runner.go:130] >   ]
	I0729 01:38:50.355818   45889 command_runner.go:130] > }
	I0729 01:38:50.355980   45889 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:38:50.355993   45889 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:38:50.356004   45889 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.3 crio true true} ...
	I0729 01:38:50.356133   45889 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-060411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:38:50.356215   45889 ssh_runner.go:195] Run: crio config
	I0729 01:38:50.397273   45889 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 01:38:50.397296   45889 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 01:38:50.397302   45889 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 01:38:50.397306   45889 command_runner.go:130] > #
	I0729 01:38:50.397312   45889 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 01:38:50.397318   45889 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 01:38:50.397327   45889 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 01:38:50.397353   45889 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 01:38:50.397360   45889 command_runner.go:130] > # reload'.
	I0729 01:38:50.397369   45889 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 01:38:50.397382   45889 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 01:38:50.397391   45889 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 01:38:50.397402   45889 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 01:38:50.397407   45889 command_runner.go:130] > [crio]
	I0729 01:38:50.397416   45889 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 01:38:50.397425   45889 command_runner.go:130] > # containers images, in this directory.
	I0729 01:38:50.397437   45889 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 01:38:50.397457   45889 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 01:38:50.397468   45889 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 01:38:50.397479   45889 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 01:38:50.397650   45889 command_runner.go:130] > # imagestore = ""
	I0729 01:38:50.397668   45889 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 01:38:50.397674   45889 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 01:38:50.397784   45889 command_runner.go:130] > storage_driver = "overlay"
	I0729 01:38:50.397796   45889 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 01:38:50.397805   45889 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 01:38:50.397812   45889 command_runner.go:130] > storage_option = [
	I0729 01:38:50.397982   45889 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 01:38:50.397991   45889 command_runner.go:130] > ]
	I0729 01:38:50.397997   45889 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 01:38:50.398013   45889 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 01:38:50.398206   45889 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 01:38:50.398221   45889 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 01:38:50.398230   45889 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 01:38:50.398237   45889 command_runner.go:130] > # always happen on a node reboot
	I0729 01:38:50.398520   45889 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 01:38:50.398535   45889 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 01:38:50.398541   45889 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 01:38:50.398548   45889 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 01:38:50.398653   45889 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 01:38:50.398670   45889 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 01:38:50.398683   45889 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 01:38:50.398915   45889 command_runner.go:130] > # internal_wipe = true
	I0729 01:38:50.398927   45889 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 01:38:50.398933   45889 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 01:38:50.399142   45889 command_runner.go:130] > # internal_repair = false
	I0729 01:38:50.399157   45889 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 01:38:50.399167   45889 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 01:38:50.399178   45889 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 01:38:50.399380   45889 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 01:38:50.399394   45889 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 01:38:50.399398   45889 command_runner.go:130] > [crio.api]
	I0729 01:38:50.399406   45889 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 01:38:50.399668   45889 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 01:38:50.399683   45889 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 01:38:50.399937   45889 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 01:38:50.399953   45889 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 01:38:50.399961   45889 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 01:38:50.400273   45889 command_runner.go:130] > # stream_port = "0"
	I0729 01:38:50.400288   45889 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 01:38:50.400296   45889 command_runner.go:130] > # stream_enable_tls = false
	I0729 01:38:50.400306   45889 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 01:38:50.400372   45889 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 01:38:50.400384   45889 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 01:38:50.400393   45889 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 01:38:50.400399   45889 command_runner.go:130] > # minutes.
	I0729 01:38:50.400407   45889 command_runner.go:130] > # stream_tls_cert = ""
	I0729 01:38:50.400424   45889 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 01:38:50.400437   45889 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 01:38:50.400445   45889 command_runner.go:130] > # stream_tls_key = ""
	I0729 01:38:50.400457   45889 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 01:38:50.400467   45889 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 01:38:50.400490   45889 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 01:38:50.400505   45889 command_runner.go:130] > # stream_tls_ca = ""
	I0729 01:38:50.400520   45889 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 01:38:50.400532   45889 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 01:38:50.400543   45889 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 01:38:50.400553   45889 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
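The [crio.api] block above fixes the CRI endpoint: the listen socket, the stream server, and the 16 MiB gRPC send/receive limits. A minimal sketch of a CRI client that honors those values, assuming the standard k8s.io/cri-api and google.golang.org/grpc packages (illustrative only, not part of the minikube test code):

// Sketch: query the CRI-O endpoint configured above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// "unix://" dial target matches the default listen = "/var/run/crio/crio.sock".
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(16777216), // mirrors grpc_max_recv_msg_size
			grpc.MaxCallSendMsgSize(16777216), // mirrors grpc_max_send_msg_size
		),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion)
}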
	I0729 01:38:50.400566   45889 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 01:38:50.400577   45889 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 01:38:50.400586   45889 command_runner.go:130] > [crio.runtime]
	I0729 01:38:50.400597   45889 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 01:38:50.400608   45889 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 01:38:50.400614   45889 command_runner.go:130] > # "nofile=1024:2048"
	I0729 01:38:50.400627   45889 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 01:38:50.400637   45889 command_runner.go:130] > # default_ulimits = [
	I0729 01:38:50.400644   45889 command_runner.go:130] > # ]
	I0729 01:38:50.400655   45889 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 01:38:50.400665   45889 command_runner.go:130] > # no_pivot = false
	I0729 01:38:50.400675   45889 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 01:38:50.400687   45889 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 01:38:50.400697   45889 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 01:38:50.400708   45889 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 01:38:50.400716   45889 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 01:38:50.400729   45889 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 01:38:50.400740   45889 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 01:38:50.400751   45889 command_runner.go:130] > # Cgroup setting for conmon
	I0729 01:38:50.400763   45889 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 01:38:50.400773   45889 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 01:38:50.400783   45889 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 01:38:50.400793   45889 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 01:38:50.400806   45889 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 01:38:50.400813   45889 command_runner.go:130] > conmon_env = [
	I0729 01:38:50.400822   45889 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 01:38:50.400833   45889 command_runner.go:130] > ]
	I0729 01:38:50.400842   45889 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 01:38:50.400856   45889 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 01:38:50.400866   45889 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 01:38:50.400872   45889 command_runner.go:130] > # default_env = [
	I0729 01:38:50.400884   45889 command_runner.go:130] > # ]
	I0729 01:38:50.400896   45889 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 01:38:50.400914   45889 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 01:38:50.400923   45889 command_runner.go:130] > # selinux = false
	I0729 01:38:50.400932   45889 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 01:38:50.400943   45889 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 01:38:50.400952   45889 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 01:38:50.400960   45889 command_runner.go:130] > # seccomp_profile = ""
	I0729 01:38:50.400973   45889 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 01:38:50.400982   45889 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 01:38:50.400995   45889 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 01:38:50.401005   45889 command_runner.go:130] > # which might increase security.
	I0729 01:38:50.401015   45889 command_runner.go:130] > # This option is currently deprecated,
	I0729 01:38:50.401027   45889 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 01:38:50.401033   45889 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 01:38:50.401045   45889 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 01:38:50.401058   45889 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 01:38:50.401070   45889 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 01:38:50.401083   45889 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 01:38:50.401094   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.401101   45889 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 01:38:50.401122   45889 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 01:38:50.401133   45889 command_runner.go:130] > # the cgroup blockio controller.
	I0729 01:38:50.401142   45889 command_runner.go:130] > # blockio_config_file = ""
	I0729 01:38:50.401152   45889 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 01:38:50.401161   45889 command_runner.go:130] > # blockio parameters.
	I0729 01:38:50.401167   45889 command_runner.go:130] > # blockio_reload = false
	I0729 01:38:50.401178   45889 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 01:38:50.401184   45889 command_runner.go:130] > # irqbalance daemon.
	I0729 01:38:50.401189   45889 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 01:38:50.401196   45889 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 01:38:50.401202   45889 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 01:38:50.401209   45889 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 01:38:50.401217   45889 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 01:38:50.401225   45889 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 01:38:50.401231   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.401239   45889 command_runner.go:130] > # rdt_config_file = ""
	I0729 01:38:50.401246   45889 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 01:38:50.401252   45889 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 01:38:50.401290   45889 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 01:38:50.401301   45889 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 01:38:50.401310   45889 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 01:38:50.401320   45889 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 01:38:50.401331   45889 command_runner.go:130] > # will be added.
	I0729 01:38:50.401338   45889 command_runner.go:130] > # default_capabilities = [
	I0729 01:38:50.401346   45889 command_runner.go:130] > # 	"CHOWN",
	I0729 01:38:50.401354   45889 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 01:38:50.401366   45889 command_runner.go:130] > # 	"FSETID",
	I0729 01:38:50.401372   45889 command_runner.go:130] > # 	"FOWNER",
	I0729 01:38:50.401380   45889 command_runner.go:130] > # 	"SETGID",
	I0729 01:38:50.401385   45889 command_runner.go:130] > # 	"SETUID",
	I0729 01:38:50.401394   45889 command_runner.go:130] > # 	"SETPCAP",
	I0729 01:38:50.401402   45889 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 01:38:50.401412   45889 command_runner.go:130] > # 	"KILL",
	I0729 01:38:50.401417   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401431   45889 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 01:38:50.401444   45889 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 01:38:50.401454   45889 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 01:38:50.401466   45889 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 01:38:50.401478   45889 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 01:38:50.401488   45889 command_runner.go:130] > default_sysctls = [
	I0729 01:38:50.401496   45889 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 01:38:50.401504   45889 command_runner.go:130] > ]
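The default_sysctls entry above makes net.ipv4.ip_unprivileged_port_start=0 the CRI-O-wide default; the per-pod equivalent goes through the Kubernetes securityContext.sysctls field, roughly as in this sketch (pod name, image, and port are made up):

// Sketch: requesting the same sysctl per pod instead of via default_sysctls.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "low-port-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					// Same knob CRI-O sets by default in this config.
					{Name: "net.ipv4.ip_unprivileged_port_start", Value: "0"},
				},
			},
			Containers: []corev1.Container{
				{Name: "web", Image: "nginx", Ports: []corev1.ContainerPort{{ContainerPort: 80}}},
			},
		},
	}
	fmt.Println(pod.Name)
}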
	I0729 01:38:50.401511   45889 command_runner.go:130] > # List of devices on the host that a
	I0729 01:38:50.401524   45889 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 01:38:50.401534   45889 command_runner.go:130] > # allowed_devices = [
	I0729 01:38:50.401543   45889 command_runner.go:130] > # 	"/dev/fuse",
	I0729 01:38:50.401548   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401559   45889 command_runner.go:130] > # List of additional devices, specified as
	I0729 01:38:50.401574   45889 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 01:38:50.401585   45889 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 01:38:50.401596   45889 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 01:38:50.401611   45889 command_runner.go:130] > # additional_devices = [
	I0729 01:38:50.401621   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401629   45889 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 01:38:50.401642   45889 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 01:38:50.401652   45889 command_runner.go:130] > # 	"/etc/cdi",
	I0729 01:38:50.401658   45889 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 01:38:50.401666   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401675   45889 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 01:38:50.401689   45889 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 01:38:50.401699   45889 command_runner.go:130] > # Defaults to false.
	I0729 01:38:50.401709   45889 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 01:38:50.401721   45889 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 01:38:50.401734   45889 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 01:38:50.401744   45889 command_runner.go:130] > # hooks_dir = [
	I0729 01:38:50.401752   45889 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 01:38:50.401760   45889 command_runner.go:130] > # ]
	I0729 01:38:50.401769   45889 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 01:38:50.401781   45889 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 01:38:50.401792   45889 command_runner.go:130] > # its default mounts from the following two files:
	I0729 01:38:50.401802   45889 command_runner.go:130] > #
	I0729 01:38:50.401811   45889 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 01:38:50.401824   45889 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 01:38:50.401835   45889 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 01:38:50.401843   45889 command_runner.go:130] > #
	I0729 01:38:50.401863   45889 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 01:38:50.401877   45889 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 01:38:50.401890   45889 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 01:38:50.401902   45889 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 01:38:50.401910   45889 command_runner.go:130] > #
	I0729 01:38:50.401915   45889 command_runner.go:130] > # default_mounts_file = ""
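The mounts files described above use a plain /SRC:/DST, one-mount-per-line format. A hypothetical parser for that format might look like the following (the path and the helper itself are assumptions, not CRI-O's own code):

// Sketch: read a mounts.conf-style file, skipping comments and blank lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/containers/mounts.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		parts := strings.SplitN(line, ":", 2)
		src, dst := parts[0], parts[0]
		if len(parts) == 2 {
			dst = parts[1]
		}
		fmt.Printf("bind mount %s -> %s\n", src, dst)
	}
}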
	I0729 01:38:50.401922   45889 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 01:38:50.401928   45889 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 01:38:50.401934   45889 command_runner.go:130] > pids_limit = 1024
	I0729 01:38:50.401940   45889 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 01:38:50.401946   45889 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 01:38:50.401954   45889 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 01:38:50.401967   45889 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 01:38:50.401972   45889 command_runner.go:130] > # log_size_max = -1
	I0729 01:38:50.401979   45889 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 01:38:50.401985   45889 command_runner.go:130] > # log_to_journald = false
	I0729 01:38:50.401991   45889 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 01:38:50.401997   45889 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 01:38:50.402004   45889 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 01:38:50.402010   45889 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 01:38:50.402016   45889 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 01:38:50.402022   45889 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 01:38:50.402027   45889 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 01:38:50.402031   45889 command_runner.go:130] > # read_only = false
	I0729 01:38:50.402037   45889 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 01:38:50.402043   45889 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 01:38:50.402051   45889 command_runner.go:130] > # live configuration reload.
	I0729 01:38:50.402057   45889 command_runner.go:130] > # log_level = "info"
	I0729 01:38:50.402068   45889 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 01:38:50.402078   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.402087   45889 command_runner.go:130] > # log_filter = ""
	I0729 01:38:50.402096   45889 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 01:38:50.402110   45889 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 01:38:50.402120   45889 command_runner.go:130] > # separated by comma.
	I0729 01:38:50.402131   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402140   45889 command_runner.go:130] > # uid_mappings = ""
	I0729 01:38:50.402152   45889 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 01:38:50.402164   45889 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 01:38:50.402173   45889 command_runner.go:130] > # separated by comma.
	I0729 01:38:50.402185   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402194   45889 command_runner.go:130] > # gid_mappings = ""
	I0729 01:38:50.402203   45889 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 01:38:50.402212   45889 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 01:38:50.402218   45889 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 01:38:50.402227   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402231   45889 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 01:38:50.402240   45889 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 01:38:50.402245   45889 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 01:38:50.402265   45889 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 01:38:50.402280   45889 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 01:38:50.402290   45889 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 01:38:50.402299   45889 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 01:38:50.402312   45889 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 01:38:50.402324   45889 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 01:38:50.402336   45889 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 01:38:50.402345   45889 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 01:38:50.402358   45889 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 01:38:50.402369   45889 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 01:38:50.402377   45889 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 01:38:50.402387   45889 command_runner.go:130] > drop_infra_ctr = false
	I0729 01:38:50.402396   45889 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 01:38:50.402407   45889 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 01:38:50.402417   45889 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 01:38:50.402422   45889 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 01:38:50.402429   45889 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 01:38:50.402436   45889 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 01:38:50.402441   45889 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 01:38:50.402446   45889 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 01:38:50.402452   45889 command_runner.go:130] > # shared_cpuset = ""
	I0729 01:38:50.402458   45889 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 01:38:50.402464   45889 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 01:38:50.402468   45889 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 01:38:50.402477   45889 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 01:38:50.402483   45889 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 01:38:50.402488   45889 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 01:38:50.402495   45889 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 01:38:50.402501   45889 command_runner.go:130] > # enable_criu_support = false
	I0729 01:38:50.402512   45889 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 01:38:50.402523   45889 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 01:38:50.402533   45889 command_runner.go:130] > # enable_pod_events = false
	I0729 01:38:50.402545   45889 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 01:38:50.402566   45889 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 01:38:50.402572   45889 command_runner.go:130] > # default_runtime = "runc"
	I0729 01:38:50.402585   45889 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 01:38:50.402600   45889 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 01:38:50.402616   45889 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 01:38:50.402627   45889 command_runner.go:130] > # creation as a file is not desired either.
	I0729 01:38:50.402639   45889 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 01:38:50.402650   45889 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 01:38:50.402658   45889 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 01:38:50.402662   45889 command_runner.go:130] > # ]
	I0729 01:38:50.402667   45889 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 01:38:50.402675   45889 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 01:38:50.402682   45889 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 01:38:50.402689   45889 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 01:38:50.402692   45889 command_runner.go:130] > #
	I0729 01:38:50.402697   45889 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 01:38:50.402701   45889 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 01:38:50.402720   45889 command_runner.go:130] > # runtime_type = "oci"
	I0729 01:38:50.402729   45889 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 01:38:50.402736   45889 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 01:38:50.402747   45889 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 01:38:50.402755   45889 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 01:38:50.402763   45889 command_runner.go:130] > # monitor_env = []
	I0729 01:38:50.402772   45889 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 01:38:50.402781   45889 command_runner.go:130] > # allowed_annotations = []
	I0729 01:38:50.402790   45889 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 01:38:50.402797   45889 command_runner.go:130] > # Where:
	I0729 01:38:50.402802   45889 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 01:38:50.402810   45889 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 01:38:50.402815   45889 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 01:38:50.402823   45889 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 01:38:50.402828   45889 command_runner.go:130] > #   in $PATH.
	I0729 01:38:50.402838   45889 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 01:38:50.402852   45889 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 01:38:50.402866   45889 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 01:38:50.402876   45889 command_runner.go:130] > #   state.
	I0729 01:38:50.402887   45889 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 01:38:50.402898   45889 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 01:38:50.402912   45889 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 01:38:50.402923   45889 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 01:38:50.402933   45889 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 01:38:50.402944   45889 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 01:38:50.402954   45889 command_runner.go:130] > #   The currently recognized values are:
	I0729 01:38:50.402964   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 01:38:50.402978   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 01:38:50.402993   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 01:38:50.403005   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 01:38:50.403021   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 01:38:50.403034   45889 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 01:38:50.403043   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 01:38:50.403066   45889 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 01:38:50.403079   45889 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 01:38:50.403093   45889 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 01:38:50.403102   45889 command_runner.go:130] > #   deprecated option "conmon".
	I0729 01:38:50.403113   45889 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 01:38:50.403120   45889 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 01:38:50.403130   45889 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 01:38:50.403141   45889 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 01:38:50.403151   45889 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 01:38:50.403158   45889 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 01:38:50.403168   45889 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 01:38:50.403175   45889 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 01:38:50.403181   45889 command_runner.go:130] > #
	I0729 01:38:50.403187   45889 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 01:38:50.403191   45889 command_runner.go:130] > #
	I0729 01:38:50.403199   45889 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 01:38:50.403208   45889 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 01:38:50.403212   45889 command_runner.go:130] > #
	I0729 01:38:50.403220   45889 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 01:38:50.403229   45889 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 01:38:50.403234   45889 command_runner.go:130] > #
	I0729 01:38:50.403244   45889 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 01:38:50.403251   45889 command_runner.go:130] > # feature.
	I0729 01:38:50.403256   45889 command_runner.go:130] > #
	I0729 01:38:50.403271   45889 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 01:38:50.403285   45889 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 01:38:50.403298   45889 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 01:38:50.403311   45889 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 01:38:50.403323   45889 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 01:38:50.403329   45889 command_runner.go:130] > #
	I0729 01:38:50.403335   45889 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 01:38:50.403344   45889 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 01:38:50.403349   45889 command_runner.go:130] > #
	I0729 01:38:50.403355   45889 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0729 01:38:50.403362   45889 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 01:38:50.403365   45889 command_runner.go:130] > #
	I0729 01:38:50.403371   45889 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 01:38:50.403378   45889 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 01:38:50.403382   45889 command_runner.go:130] > # limitation.
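Putting the notes above together, a Pod that opts into the seccomp notifier carries the io.kubernetes.cri-o.seccompNotifierAction annotation and uses restartPolicy Never. A rough sketch (pod name, image, and command are placeholders):

// Sketch: a Pod spec that enables the seccomp notifier described above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-notifier-demo",
			Annotations: map[string]string{
				// Terminate the workload ~5s after a blocked syscall is seen.
				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
			},
		},
		Spec: corev1.PodSpec{
			// Required, otherwise the kubelet restarts the container immediately.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Println(pod.Annotations)
}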
	I0729 01:38:50.403389   45889 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 01:38:50.403393   45889 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 01:38:50.403399   45889 command_runner.go:130] > runtime_type = "oci"
	I0729 01:38:50.403403   45889 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 01:38:50.403409   45889 command_runner.go:130] > runtime_config_path = ""
	I0729 01:38:50.403414   45889 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 01:38:50.403420   45889 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 01:38:50.403423   45889 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 01:38:50.403429   45889 command_runner.go:130] > monitor_env = [
	I0729 01:38:50.403434   45889 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 01:38:50.403439   45889 command_runner.go:130] > ]
	I0729 01:38:50.403443   45889 command_runner.go:130] > privileged_without_host_devices = false
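Since the file is plain TOML, the [crio.runtime.runtimes.*] table above can be read back with any TOML decoder. A sketch assuming github.com/BurntSushi/toml and a hand-rolled struct that covers only a few of the documented fields (not CRI-O's internal types; the config path is also an assumption):

// Sketch: decode the runtime handlers out of a crio.conf-style file.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type runtimeHandler struct {
	RuntimePath   string   `toml:"runtime_path"`
	RuntimeType   string   `toml:"runtime_type"`
	RuntimeRoot   string   `toml:"runtime_root"`
	MonitorPath   string   `toml:"monitor_path"`
	MonitorCgroup string   `toml:"monitor_cgroup"`
	MonitorEnv    []string `toml:"monitor_env"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			DefaultRuntime string                    `toml:"default_runtime"`
			Runtimes       map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		panic(err)
	}
	for name, h := range cfg.Crio.Runtime.Runtimes {
		fmt.Printf("%s -> %s (type %q, monitor %s)\n", name, h.RuntimePath, h.RuntimeType, h.MonitorPath)
	}
}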
	I0729 01:38:50.403451   45889 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 01:38:50.403456   45889 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 01:38:50.403464   45889 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 01:38:50.403471   45889 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0729 01:38:50.403480   45889 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0729 01:38:50.403486   45889 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 01:38:50.403496   45889 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 01:38:50.403506   45889 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 01:38:50.403514   45889 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 01:38:50.403522   45889 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 01:38:50.403527   45889 command_runner.go:130] > # Example:
	I0729 01:38:50.403532   45889 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 01:38:50.403537   45889 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 01:38:50.403541   45889 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 01:38:50.403545   45889 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 01:38:50.403548   45889 command_runner.go:130] > # cpuset = "0-1"
	I0729 01:38:50.403551   45889 command_runner.go:130] > # cpushares = 0
	I0729 01:38:50.403554   45889 command_runner.go:130] > # Where:
	I0729 01:38:50.403561   45889 command_runner.go:130] > # The workload name is workload-type.
	I0729 01:38:50.403567   45889 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 01:38:50.403572   45889 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 01:38:50.403577   45889 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 01:38:50.403584   45889 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 01:38:50.403589   45889 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
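Following the workload example above, the pod-side annotations would look roughly like this sketch (the container name and the cpushares value are placeholders):

// Sketch: annotations that opt a pod/container into the example workload.
package main

import "fmt"

func main() {
	annotations := map[string]string{
		// Key-only activation annotation; the value is ignored.
		"io.crio/workload": "",
		// Per-container override, $annotation_prefix/$container_name form as in the example above.
		"io.crio.workload-type/app": `{"cpushares": "512"}`,
	}
	for k, v := range annotations {
		fmt.Printf("%s=%s\n", k, v)
	}
}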
	I0729 01:38:50.403594   45889 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 01:38:50.403599   45889 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 01:38:50.403603   45889 command_runner.go:130] > # Default value is set to true
	I0729 01:38:50.403607   45889 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 01:38:50.403612   45889 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 01:38:50.403617   45889 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 01:38:50.403621   45889 command_runner.go:130] > # Default value is set to 'false'
	I0729 01:38:50.403624   45889 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 01:38:50.403630   45889 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 01:38:50.403633   45889 command_runner.go:130] > #
	I0729 01:38:50.403638   45889 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 01:38:50.403644   45889 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 01:38:50.403649   45889 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 01:38:50.403655   45889 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 01:38:50.403660   45889 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 01:38:50.403663   45889 command_runner.go:130] > [crio.image]
	I0729 01:38:50.403668   45889 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 01:38:50.403672   45889 command_runner.go:130] > # default_transport = "docker://"
	I0729 01:38:50.403677   45889 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 01:38:50.403682   45889 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 01:38:50.403686   45889 command_runner.go:130] > # global_auth_file = ""
	I0729 01:38:50.403691   45889 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 01:38:50.403696   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.403700   45889 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 01:38:50.403705   45889 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 01:38:50.403712   45889 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 01:38:50.403717   45889 command_runner.go:130] > # This option supports live configuration reload.
	I0729 01:38:50.403721   45889 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 01:38:50.403729   45889 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 01:38:50.403734   45889 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 01:38:50.403744   45889 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 01:38:50.403751   45889 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 01:38:50.403755   45889 command_runner.go:130] > # pause_command = "/pause"
	I0729 01:38:50.403761   45889 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 01:38:50.403769   45889 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 01:38:50.403774   45889 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 01:38:50.403780   45889 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 01:38:50.403785   45889 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 01:38:50.403793   45889 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 01:38:50.403796   45889 command_runner.go:130] > # pinned_images = [
	I0729 01:38:50.403800   45889 command_runner.go:130] > # ]
	I0729 01:38:50.403807   45889 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 01:38:50.403815   45889 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 01:38:50.403822   45889 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 01:38:50.403830   45889 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 01:38:50.403835   45889 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 01:38:50.403840   45889 command_runner.go:130] > # signature_policy = ""
	I0729 01:38:50.403846   45889 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 01:38:50.403857   45889 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 01:38:50.403865   45889 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 01:38:50.403872   45889 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0729 01:38:50.403878   45889 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 01:38:50.403885   45889 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 01:38:50.403891   45889 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 01:38:50.403899   45889 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 01:38:50.403903   45889 command_runner.go:130] > # changing them here.
	I0729 01:38:50.403908   45889 command_runner.go:130] > # insecure_registries = [
	I0729 01:38:50.403912   45889 command_runner.go:130] > # ]
	I0729 01:38:50.403919   45889 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 01:38:50.403924   45889 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 01:38:50.403931   45889 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 01:38:50.403936   45889 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 01:38:50.403942   45889 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 01:38:50.403947   45889 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 01:38:50.403953   45889 command_runner.go:130] > # CNI plugins.
	I0729 01:38:50.403957   45889 command_runner.go:130] > [crio.network]
	I0729 01:38:50.403963   45889 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 01:38:50.403970   45889 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0729 01:38:50.403976   45889 command_runner.go:130] > # cni_default_network = ""
	I0729 01:38:50.403981   45889 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 01:38:50.403987   45889 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 01:38:50.403993   45889 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 01:38:50.403998   45889 command_runner.go:130] > # plugin_dirs = [
	I0729 01:38:50.404002   45889 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 01:38:50.404007   45889 command_runner.go:130] > # ]
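Since CRI-O picks the first network found in network_dir, listing the candidate CNI config files in lexical order is a quick way to see which one wins. A small illustrative helper (not CRI-O code):

// Sketch: enumerate the CNI config files CRI-O would consider, in order.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const networkDir = "/etc/cni/net.d/" // default network_dir above
	entries, err := os.ReadDir(networkDir) // returned sorted by file name
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println(filepath.Join(networkDir, e.Name()))
		}
	}
}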
	I0729 01:38:50.404013   45889 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 01:38:50.404019   45889 command_runner.go:130] > [crio.metrics]
	I0729 01:38:50.404023   45889 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 01:38:50.404028   45889 command_runner.go:130] > enable_metrics = true
	I0729 01:38:50.404032   45889 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 01:38:50.404035   45889 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 01:38:50.404041   45889 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 01:38:50.404049   45889 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 01:38:50.404054   45889 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 01:38:50.404060   45889 command_runner.go:130] > # metrics_collectors = [
	I0729 01:38:50.404064   45889 command_runner.go:130] > # 	"operations",
	I0729 01:38:50.404070   45889 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 01:38:50.404075   45889 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 01:38:50.404081   45889 command_runner.go:130] > # 	"operations_errors",
	I0729 01:38:50.404085   45889 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 01:38:50.404090   45889 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 01:38:50.404094   45889 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 01:38:50.404100   45889 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 01:38:50.404107   45889 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 01:38:50.404111   45889 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 01:38:50.404117   45889 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 01:38:50.404121   45889 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 01:38:50.404127   45889 command_runner.go:130] > # 	"containers_oom_total",
	I0729 01:38:50.404131   45889 command_runner.go:130] > # 	"containers_oom",
	I0729 01:38:50.404137   45889 command_runner.go:130] > # 	"processes_defunct",
	I0729 01:38:50.404140   45889 command_runner.go:130] > # 	"operations_total",
	I0729 01:38:50.404145   45889 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 01:38:50.404149   45889 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 01:38:50.404156   45889 command_runner.go:130] > # 	"operations_errors_total",
	I0729 01:38:50.404160   45889 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 01:38:50.404166   45889 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 01:38:50.404171   45889 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 01:38:50.404177   45889 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 01:38:50.404182   45889 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 01:38:50.404188   45889 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 01:38:50.404192   45889 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 01:38:50.404198   45889 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 01:38:50.404204   45889 command_runner.go:130] > # ]
	I0729 01:38:50.404211   45889 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 01:38:50.404215   45889 command_runner.go:130] > # metrics_port = 9090
	I0729 01:38:50.404221   45889 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 01:38:50.404225   45889 command_runner.go:130] > # metrics_socket = ""
	I0729 01:38:50.404232   45889 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 01:38:50.404238   45889 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 01:38:50.404245   45889 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 01:38:50.404251   45889 command_runner.go:130] > # certificate on any modification event.
	I0729 01:38:50.404255   45889 command_runner.go:130] > # metrics_cert = ""
	I0729 01:38:50.404260   45889 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 01:38:50.404267   45889 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 01:38:50.404271   45889 command_runner.go:130] > # metrics_key = ""
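With enable_metrics = true and the default metrics_port of 9090, the Prometheus endpoint can be fetched directly over plain HTTP, since no metrics_cert/metrics_key is configured here. A minimal sketch:

// Sketch: fetch CRI-O's Prometheus metrics from the port configured above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("fetched %d bytes of Prometheus metrics\n", len(body))
}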
	I0729 01:38:50.404278   45889 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 01:38:50.404282   45889 command_runner.go:130] > [crio.tracing]
	I0729 01:38:50.404288   45889 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 01:38:50.404293   45889 command_runner.go:130] > # enable_tracing = false
	I0729 01:38:50.404299   45889 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0729 01:38:50.404305   45889 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 01:38:50.404312   45889 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 01:38:50.404318   45889 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 01:38:50.404323   45889 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 01:38:50.404328   45889 command_runner.go:130] > [crio.nri]
	I0729 01:38:50.404333   45889 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 01:38:50.404336   45889 command_runner.go:130] > # enable_nri = false
	I0729 01:38:50.404341   45889 command_runner.go:130] > # NRI socket to listen on.
	I0729 01:38:50.404346   45889 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 01:38:50.404352   45889 command_runner.go:130] > # NRI plugin directory to use.
	I0729 01:38:50.404357   45889 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 01:38:50.404363   45889 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 01:38:50.404368   45889 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 01:38:50.404375   45889 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 01:38:50.404379   45889 command_runner.go:130] > # nri_disable_connections = false
	I0729 01:38:50.404384   45889 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 01:38:50.404391   45889 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 01:38:50.404395   45889 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 01:38:50.404402   45889 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 01:38:50.404407   45889 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 01:38:50.404413   45889 command_runner.go:130] > [crio.stats]
	I0729 01:38:50.404418   45889 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 01:38:50.404425   45889 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 01:38:50.404429   45889 command_runner.go:130] > # stats_collection_period = 0
	I0729 01:38:50.404993   45889 command_runner.go:130] ! time="2024-07-29 01:38:50.359340534Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 01:38:50.405023   45889 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 01:38:50.405143   45889 cni.go:84] Creating CNI manager for ""
	I0729 01:38:50.405155   45889 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 01:38:50.405164   45889 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:38:50.405181   45889 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-060411 NodeName:multinode-060411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:38:50.405301   45889 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-060411"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
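The YAML above is rendered by minikube from the kubeadm options struct logged at kubeadm.go:181. A heavily simplified stand-in showing the template mechanism for just the nodeRegistration fragment (not minikube's actual template; field names here are placeholders):

// Sketch: render a nodeRegistration fragment from a few of the logged options.
package main

import (
	"os"
	"text/template"
)

type nodeOpts struct {
	NodeName  string
	NodeIP    string
	CRISocket string
}

const fragment = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("nodeRegistration").Parse(fragment))
	opts := nodeOpts{
		NodeName:  "multinode-060411",
		NodeIP:    "192.168.39.140",
		CRISocket: "/var/run/crio/crio.sock",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}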
	
	I0729 01:38:50.405356   45889 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:38:50.415733   45889 command_runner.go:130] > kubeadm
	I0729 01:38:50.415745   45889 command_runner.go:130] > kubectl
	I0729 01:38:50.415748   45889 command_runner.go:130] > kubelet
	I0729 01:38:50.415765   45889 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:38:50.415810   45889 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:38:50.425796   45889 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 01:38:50.442463   45889 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:38:50.458971   45889 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 01:38:50.476088   45889 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0729 01:38:50.480006   45889 command_runner.go:130] > 192.168.39.140	control-plane.minikube.internal
	I0729 01:38:50.480061   45889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:38:50.624732   45889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:38:50.640460   45889 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411 for IP: 192.168.39.140
	I0729 01:38:50.640489   45889 certs.go:194] generating shared ca certs ...
	I0729 01:38:50.640511   45889 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:38:50.640687   45889 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:38:50.640751   45889 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:38:50.640763   45889 certs.go:256] generating profile certs ...
	I0729 01:38:50.640866   45889 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/client.key
	I0729 01:38:50.640940   45889 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.key.cce4d0cc
	I0729 01:38:50.640987   45889 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.key
	I0729 01:38:50.641002   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 01:38:50.641021   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 01:38:50.641046   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 01:38:50.641070   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 01:38:50.641087   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 01:38:50.641104   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 01:38:50.641117   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 01:38:50.641127   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 01:38:50.641179   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:38:50.641207   45889 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:38:50.641215   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:38:50.641235   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:38:50.641257   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:38:50.641276   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:38:50.641316   45889 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:38:50.641352   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.641368   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:50.641383   45889 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem -> /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.641957   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:38:50.667969   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:38:50.693452   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:38:50.718452   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:38:50.741433   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 01:38:50.764363   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:38:50.788361   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:38:50.812303   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/multinode-060411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:38:50.835346   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:38:50.858424   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:38:50.881115   45889 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:38:50.904003   45889 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:38:50.920431   45889 ssh_runner.go:195] Run: openssl version
	I0729 01:38:50.926644   45889 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 01:38:50.926707   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:38:50.937683   45889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.941922   45889 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.942053   45889 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.942110   45889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:38:50.947433   45889 command_runner.go:130] > 51391683
	I0729 01:38:50.947684   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:38:50.957408   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:38:50.968064   45889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.972212   45889 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.972242   45889 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.972269   45889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:38:50.977757   45889 command_runner.go:130] > 3ec20f2e
	I0729 01:38:50.977808   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:38:50.988070   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:38:50.998713   45889 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.002948   45889 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.002977   45889 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.003019   45889 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:38:51.008729   45889 command_runner.go:130] > b5213941
	I0729 01:38:51.008813   45889 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
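(The three blocks above each compute "openssl x509 -hash -noout" for a CA certificate and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL can locate it by subject hash. A minimal sketch of that step in Go, assuming openssl is on PATH; linkCertByHash and the paths are illustrative, not minikube's code.)

// Sketch of the hash-and-symlink step seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors: openssl x509 -hash -noout -in <cert>, then
// ln -fs <cert> <certsDir>/<hash>.0 so the cert is discoverable by hash.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/16623.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}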
	I0729 01:38:51.018191   45889 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:38:51.022379   45889 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:38:51.022405   45889 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 01:38:51.022414   45889 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0729 01:38:51.022423   45889 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 01:38:51.022432   45889 command_runner.go:130] > Access: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022443   45889 command_runner.go:130] > Modify: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022451   45889 command_runner.go:130] > Change: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022458   45889 command_runner.go:130] >  Birth: 2024-07-29 01:31:50.386455805 +0000
	I0729 01:38:51.022524   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:38:51.028877   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.028957   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:38:51.034346   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.034506   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:38:51.040285   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.040335   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:38:51.046108   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.046174   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:38:51.051674   45889 command_runner.go:130] > Certificate will not expire
	I0729 01:38:51.051734   45889 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 01:38:51.056980   45889 command_runner.go:130] > Certificate will not expire
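(The checks above use "openssl x509 -checkend 86400" to confirm each certificate is valid for at least another 24 hours. A pure-Go sketch of the same check with crypto/x509 follows; the path is illustrative and this is not the test harness's own code.)

// Sketch of the -checkend 86400 equivalent: load a PEM certificate and
// report whether it expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+d, i.e. the cert expires "soon".
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}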
	I0729 01:38:51.057131   45889 kubeadm.go:392] StartCluster: {Name:multinode-060411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-060411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.233 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.190 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:38:51.057235   45889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:38:51.057287   45889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:38:51.103358   45889 command_runner.go:130] > 18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54
	I0729 01:38:51.103390   45889 command_runner.go:130] > f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316
	I0729 01:38:51.103401   45889 command_runner.go:130] > a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1
	I0729 01:38:51.103410   45889 command_runner.go:130] > ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e
	I0729 01:38:51.103419   45889 command_runner.go:130] > bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de
	I0729 01:38:51.103428   45889 command_runner.go:130] > a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4
	I0729 01:38:51.103438   45889 command_runner.go:130] > ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd
	I0729 01:38:51.103471   45889 command_runner.go:130] > c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8
	I0729 01:38:51.103497   45889 cri.go:89] found id: "18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54"
	I0729 01:38:51.103506   45889 cri.go:89] found id: "f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316"
	I0729 01:38:51.103510   45889 cri.go:89] found id: "a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1"
	I0729 01:38:51.103514   45889 cri.go:89] found id: "ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e"
	I0729 01:38:51.103517   45889 cri.go:89] found id: "bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de"
	I0729 01:38:51.103521   45889 cri.go:89] found id: "a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4"
	I0729 01:38:51.103526   45889 cri.go:89] found id: "ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd"
	I0729 01:38:51.103529   45889 cri.go:89] found id: "c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8"
	I0729 01:38:51.103531   45889 cri.go:89] found id: ""
	I0729 01:38:51.103580   45889 ssh_runner.go:195] Run: sudo runc list -f json
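(The "found id:" lines above come from running crictl with a namespace label filter and splitting its quiet output into container IDs. A small sketch of that pattern, assuming crictl is installed and the CRI endpoint is configured; this is illustrative, not minikube's cri.go.)

// Sketch: list kube-system containers via crictl and print their IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("listing containers:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}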
	
	
	==> CRI-O <==
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.862300821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217383862274627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88f39ffc-76b4-4508-872e-2b1d2af12472 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.862798469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0e94644-8003-4549-964f-44ab04084ac1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.862850533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0e94644-8003-4549-964f-44ab04084ac1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.863327452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0e94644-8003-4549-964f-44ab04084ac1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.905193413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20e4c52f-40f7-4dac-b53a-3a1bccb72117 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.905266352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20e4c52f-40f7-4dac-b53a-3a1bccb72117 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.906160642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7c899db-460a-4023-8566-f5c66dbd21cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.906742683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217383906720189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7c899db-460a-4023-8566-f5c66dbd21cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.907242892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33746b76-72a5-4fed-8dcd-302da46b286e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.907294790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33746b76-72a5-4fed-8dcd-302da46b286e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.907638030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33746b76-72a5-4fed-8dcd-302da46b286e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.950288973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2121f3b9-80e4-452c-9da4-ac7b1835e002 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.950392234Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2121f3b9-80e4-452c-9da4-ac7b1835e002 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.951718055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27cc6bcf-4ec4-46ff-a35b-3d6d01b9e93a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.952485634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217383952456893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27cc6bcf-4ec4-46ff-a35b-3d6d01b9e93a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.953436178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c54fa01-6757-412f-bdfa-8c86bc8c0941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.953490602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c54fa01-6757-412f-bdfa-8c86bc8c0941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:03 multinode-060411 crio[2875]: time="2024-07-29 01:43:03.953822264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c54fa01-6757-412f-bdfa-8c86bc8c0941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.001568780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a9e974d-87a8-470f-9d60-e60528c56581 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.001672957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a9e974d-87a8-470f-9d60-e60528c56581 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.003397998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecac814b-0f81-41fa-97ee-73f7b64d93c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.004335122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217384004305678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecac814b-0f81-41fa-97ee-73f7b64d93c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.005116581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b05c23a-38f9-4450-aa67-c347e4cd62ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.005178391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b05c23a-38f9-4450-aa67-c347e4cd62ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:43:04 multinode-060411 crio[2875]: time="2024-07-29 01:43:04.005569138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4df6a6ff674dd8033a33202610a0f16d48c77e0cada9eb311619083085a9261d,PodSandboxId:405ee6e785e7998ec4bbbfb51cdc97159c62367cd2546f4b5506ef13bf5771ac,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722217171666491621,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442,PodSandboxId:e0f07bc4f18a758cb5d774db87eab2d4784f5d186658f9ac8cae5585da52d6ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722217138198469252,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae,PodSandboxId:67b79e167c379c2f04c49debc43808fe1f5d38644f688827d42acb22b464ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722217138039587135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846,PodSandboxId:30c77960c991c68bfb605ec4b96f0a166d3ce8ab8bb1902fefbda25f18a33a02,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722217137952062429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]
string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dd0c12ffb6b7317cd7fd021123eb9ee9e6c15c1b638f2bcf66703c57011ddd8,PodSandboxId:7eca6d986748f1d672526ce7dcb3b5e1be1fb2bb630528fea17fe01f189edbb8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217137968648862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9,PodSandboxId:61f6c9eb7c53d44ad131dba84f1fffb58330dd032f0899f56676ed03c9983108,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722217133135442803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef7600a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb,PodSandboxId:d2683e5e7e5d677b5c594d005f50dca618f583c1fce990a98033d3b20f43f37f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722217133121108172,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d00d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277,PodSandboxId:7e5ba26a31f116ded4c6e5d444cc44c731ceb48d425ee5890ab9f58cdcaeb6a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722217133073237610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4,PodSandboxId:de0369d916d142fd12ee337ab45d1e83672edab558988b97db4f17871173f4e0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722217133020266652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:map[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf650ce9fd8f2b55a3d9d3c70320ed1f30e2531022ce42278abb7767ae0407e6,PodSandboxId:4fb572bd767897446a4b3edb1570d02d9b06e6ee7331b9ca273a4dc2fa57c98a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722216806207586068,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lfmwp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 11b7cc27-3dde-47b9-afd8-649382e4ad37,},Annotations:map[string]string{io.kubernetes.container.hash: cab4818a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54,PodSandboxId:61f6ff5d6d8d140d466c4d97960eb698d853c3e9779f704a5c02ae38697804fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722216750095083997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mnz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f01ff-afc1-464e-a3f2-e7b7d11203ad,},Annotations:map[string]string{io.kubernetes.container.hash: 2bbf6752,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d2ae5c2528f022c85cce760818233b3c1a481a3791512f3721584a59ad7316,PodSandboxId:8d777617fd166695c41da97bd8c161db023ad04c8fd440b47dd834935bab25ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722216749771639550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 83dec14c-5f93-4dee-bb62-52cae06307f7,},Annotations:map[string]string{io.kubernetes.container.hash: 50e1a7c7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1,PodSandboxId:bf7dd7f13d3047665b6b15de3a78cc7b9a3f73d19386a1fa3196c0f59ac0906b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722216737871867839,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8csbb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 6fd59518-57af-4a69-8697-f7fbb6a51b5e,},Annotations:map[string]string{io.kubernetes.container.hash: ae1b45d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e,PodSandboxId:f2093ca047b64c97dd8e41a53f8aceb05222562d1ed159915625cc373c2e578f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722216734184544509,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k7k6j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 18fe5ccd-46a0-4197-a687-af0fca1f518d,},Annotations:map[string]string{io.kubernetes.container.hash: a1fc7fcb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de,PodSandboxId:3c5353bc3d71912460125f54eb2dbaa41c60cc780529c947fc4589fbda64c02a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722216714494739513,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d046bb15b263685df44f0950ef760
0a,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed1f7e7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4,PodSandboxId:89c607e32a889420acea1fd60a82823ebaac89b77db23a2a07cadee800264715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722216714493846493,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46e732ea87b9c252d0
0d262efc8b3fe8,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd,PodSandboxId:b3524d640c8536a390a5adf97d64a96b386284af9d0b0b45095475f0dfc63dc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722216714482338978,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcff5185417c81d7d28fd554089e4bd9,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8,PodSandboxId:1ec80da5a502788fccad616b49c4e3e655ebf6a622d390c2e7af479374bb4e83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722216714386150424,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-060411,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a268c1ff6ce2ff894eb8597f240e527e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 65e98503,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b05c23a-38f9-4450-aa67-c347e4cd62ff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4df6a6ff674dd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   405ee6e785e79       busybox-fc5497c4f-lfmwp
	ae351cb3c920c       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   e0f07bc4f18a7       kindnet-8csbb
	1ba2a2fb41f03       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   67b79e167c379       coredns-7db6d8ff4d-mnz72
	3dd0c12ffb6b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   7eca6d986748f       storage-provisioner
	51ce34270fcd5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   30c77960c991c       kube-proxy-k7k6j
	299e426be4a55       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   61f6c9eb7c53d       etcd-multinode-060411
	e73231afd5e92       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   d2683e5e7e5d6       kube-controller-manager-multinode-060411
	6add5858278eb       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   7e5ba26a31f11       kube-scheduler-multinode-060411
	bff07df0b8693       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   de0369d916d14       kube-apiserver-multinode-060411
	bf650ce9fd8f2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   4fb572bd76789       busybox-fc5497c4f-lfmwp
	18c694585ad58       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   61f6ff5d6d8d1       coredns-7db6d8ff4d-mnz72
	f9d2ae5c2528f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   8d777617fd166       storage-provisioner
	a309347431949       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   bf7dd7f13d304       kindnet-8csbb
	ef2d721b7d276       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   f2093ca047b64       kube-proxy-k7k6j
	bcee73846b860       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   3c5353bc3d719       etcd-multinode-060411
	a841bfa674c59       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   89c607e32a889       kube-controller-manager-multinode-060411
	ded106b4ad30b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   b3524d640c853       kube-scheduler-multinode-060411
	c9289f8f5185e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   1ec80da5a5027       kube-apiserver-multinode-060411
	
	
	==> coredns [18c694585ad583e71d9bf791d5e8265f1ad31313be119f7a0fc626f0424b0e54] <==
	[INFO] 10.244.1.2:50982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001791769s
	[INFO] 10.244.1.2:59436 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135076s
	[INFO] 10.244.1.2:41003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156762s
	[INFO] 10.244.1.2:35882 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001348049s
	[INFO] 10.244.1.2:59947 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097365s
	[INFO] 10.244.1.2:41222 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097729s
	[INFO] 10.244.1.2:37489 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093826s
	[INFO] 10.244.0.3:48014 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146981s
	[INFO] 10.244.0.3:36554 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144887s
	[INFO] 10.244.0.3:53982 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061017s
	[INFO] 10.244.0.3:33894 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067377s
	[INFO] 10.244.1.2:54271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125176s
	[INFO] 10.244.1.2:45884 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000208461s
	[INFO] 10.244.1.2:34031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142101s
	[INFO] 10.244.1.2:38095 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108016s
	[INFO] 10.244.0.3:39252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114871s
	[INFO] 10.244.0.3:57701 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011446s
	[INFO] 10.244.0.3:39879 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091184s
	[INFO] 10.244.0.3:44400 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083292s
	[INFO] 10.244.1.2:52519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209241s
	[INFO] 10.244.1.2:53110 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087666s
	[INFO] 10.244.1.2:36160 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085603s
	[INFO] 10.244.1.2:52796 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085467s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1ba2a2fb41f03c57c8bad8e8b905876fd3f895b7bd308a6e1b679d57f6e2e4ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45320 - 41229 "HINFO IN 4713983434540112245.3086507766231972451. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018640909s
	
	
	==> describe nodes <==
	Name:               multinode-060411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-060411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-060411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_32_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:31:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-060411
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:43:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:31:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:31:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:31:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:38:56 +0000   Mon, 29 Jul 2024 01:32:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    multinode-060411
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b41fcf44c544a95a0b0c5c93894c9e5
	  System UUID:                5b41fcf4-4c54-4a95-a0b0-c5c93894c9e5
	  Boot ID:                    374b4634-fa11-4285-a07e-7da972ab5925
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lfmwp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 coredns-7db6d8ff4d-mnz72                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-060411                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-8csbb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-060411             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-060411    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k7k6j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-060411             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-060411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-060411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-060411 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-060411 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-060411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-060411 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-060411 event: Registered Node multinode-060411 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-060411 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node multinode-060411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node multinode-060411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node multinode-060411 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m55s                  node-controller  Node multinode-060411 event: Registered Node multinode-060411 in Controller
	
	
	Name:               multinode-060411-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-060411-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=multinode-060411
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T01_39_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:39:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-060411-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:40:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:41:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:41:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:41:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 01:40:09 +0000   Mon, 29 Jul 2024 01:41:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-060411-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96fba407efd741e89bdfe057767df496
	  System UUID:                96fba407-efd7-41e8-9bdf-e057767df496
	  Boot ID:                    97542112-0e7f-4f39-981e-46c37d6d4d97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8n5zk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-4k724              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-ck46f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 9m58s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-060411-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-060411-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-060411-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m44s                  kubelet          Node multinode-060411-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-060411-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-060411-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-060411-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-060411-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-060411-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.063269] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.171605] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.153875] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.291425] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.147932] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.446261] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.002945] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.085650] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 01:32] systemd-fstab-generator[1476]: Ignoring "noauto" option for root device
	[  +0.131408] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.164879] kauditd_printk_skb: 56 callbacks suppressed
	[Jul29 01:33] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 01:38] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.141196] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.177963] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.143071] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.316504] systemd-fstab-generator[2860]: Ignoring "noauto" option for root device
	[  +8.008418] systemd-fstab-generator[2958]: Ignoring "noauto" option for root device
	[  +0.084645] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.580218] systemd-fstab-generator[3080]: Ignoring "noauto" option for root device
	[  +5.684080] kauditd_printk_skb: 74 callbacks suppressed
	[Jul29 01:39] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.173500] systemd-fstab-generator[3912]: Ignoring "noauto" option for root device
	[ +17.575057] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [299e426be4a5586af1a511622ce44a07ee065f9ac07dadee9a2d3975b4ceaeb9] <==
	{"level":"info","ts":"2024-07-29T01:38:53.578387Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:38:53.579677Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:38:53.579707Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:38:53.578683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac switched to configuration voters=(15657868212029965228)"}
	{"level":"info","ts":"2024-07-29T01:38:53.579893Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2024-07-29T01:38:53.580083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:53.580159Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:38:53.581615Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:38:53.583063Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:38:53.57877Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:38:53.583253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:38:55.256722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:55.256789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:55.256841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-07-29T01:38:55.256857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.256863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.256871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.256881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-07-29T01:38:55.262067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:multinode-060411 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:38:55.262113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:38:55.262433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:38:55.262536Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:38:55.26256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:38:55.264242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:38:55.264242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	
	
	==> etcd [bcee73846b860be33849e164a556e11985b5265784ca290324b297140699a1de] <==
	{"level":"info","ts":"2024-07-29T01:31:55.611475Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T01:31:55.613104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:31:55.616474Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:31:55.616793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:31:55.616846Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:31:55.619332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"warn","ts":"2024-07-29T01:33:00.438557Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.222935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4876431909037710045 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:43ac90fc1d929adc>","response":"size:42"}
	{"level":"info","ts":"2024-07-29T01:33:00.439049Z","caller":"traceutil/trace.go:171","msg":"trace[1746502634] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:515; }","duration":"230.699008ms","start":"2024-07-29T01:33:00.208329Z","end":"2024-07-29T01:33:00.439028Z","steps":["trace[1746502634] 'read index received'  (duration: 83.809238ms)","trace[1746502634] 'applied index is now lower than readState.Index'  (duration: 146.888097ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:33:00.439376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.025656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-060411-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T01:33:00.439424Z","caller":"traceutil/trace.go:171","msg":"trace[171258326] range","detail":"{range_begin:/registry/minions/multinode-060411-m02; range_end:; response_count:1; response_revision:497; }","duration":"231.10678ms","start":"2024-07-29T01:33:00.208306Z","end":"2024-07-29T01:33:00.439412Z","steps":["trace[171258326] 'agreement among raft nodes before linearized reading'  (duration: 231.026391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:33:00.439068Z","caller":"traceutil/trace.go:171","msg":"trace[14796823] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"148.243876ms","start":"2024-07-29T01:33:00.290751Z","end":"2024-07-29T01:33:00.438995Z","steps":["trace[14796823] 'process raft request'  (duration: 148.114207ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:33:55.435097Z","caller":"traceutil/trace.go:171","msg":"trace[1096876734] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"215.645835ms","start":"2024-07-29T01:33:55.219405Z","end":"2024-07-29T01:33:55.43505Z","steps":["trace[1096876734] 'process raft request'  (duration: 133.658793ms)","trace[1096876734] 'compare'  (duration: 81.827838ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T01:33:55.435768Z","caller":"traceutil/trace.go:171","msg":"trace[663709683] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"184.745614ms","start":"2024-07-29T01:33:55.251012Z","end":"2024-07-29T01:33:55.435757Z","steps":["trace[663709683] 'process raft request'  (duration: 184.45659ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:34:02.604367Z","caller":"traceutil/trace.go:171","msg":"trace[1638847618] transaction","detail":"{read_only:false; response_revision:675; number_of_response:1; }","duration":"227.079893ms","start":"2024-07-29T01:34:02.377272Z","end":"2024-07-29T01:34:02.604352Z","steps":["trace[1638847618] 'process raft request'  (duration: 226.773508ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T01:37:10.512704Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T01:37:10.512877Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-060411","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	{"level":"warn","ts":"2024-07-29T01:37:10.513043Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:37:10.513132Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/29 01:37:10 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T01:37:10.602611Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T01:37:10.602652Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T01:37:10.602737Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d94bec2e0ded43ac","current-leader-member-id":"d94bec2e0ded43ac"}
	{"level":"info","ts":"2024-07-29T01:37:10.605885Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:37:10.606282Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-29T01:37:10.606325Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-060411","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	
	
	==> kernel <==
	 01:43:04 up 11 min,  0 users,  load average: 0.28, 0.22, 0.13
	Linux multinode-060411 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a309347431949a047c73522b7a2b599b2895342273fcdd47644ed42ce01a16b1] <==
	I0729 01:36:28.984751       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:38.981826       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:36:38.981900       1 main.go:299] handling current node
	I0729 01:36:38.981918       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:36:38.981924       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:36:38.982159       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:36:38.982209       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:48.984503       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:36:48.984627       1 main.go:299] handling current node
	I0729 01:36:48.984660       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:36:48.984679       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:36:48.984888       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:36:48.984918       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:58.981829       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:36:58.982079       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:36:58.982243       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:36:58.982270       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:36:58.982382       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:36:58.982452       1 main.go:299] handling current node
	I0729 01:37:08.984567       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:37:08.984634       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:37:08.984824       1 main.go:295] Handling node with IPs: map[192.168.39.190:{}]
	I0729 01:37:08.984846       1 main.go:322] Node multinode-060411-m03 has CIDR [10.244.3.0/24] 
	I0729 01:37:08.984911       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:37:08.984934       1 main.go:299] handling current node
	
	
	==> kindnet [ae351cb3c920c2080c10a102d162cdfc9004a93dfb6bb88c4d8fddf893b0d442] <==
	I0729 01:41:59.176201       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:09.183291       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:42:09.183397       1 main.go:299] handling current node
	I0729 01:42:09.183435       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:42:09.183459       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:19.180709       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:42:19.180848       1 main.go:299] handling current node
	I0729 01:42:19.180884       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:42:19.180905       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:29.176745       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:42:29.176886       1 main.go:299] handling current node
	I0729 01:42:29.176914       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:42:29.176934       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:39.186103       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:42:39.186308       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:39.186501       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:42:39.186545       1 main.go:299] handling current node
	I0729 01:42:49.180274       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:42:49.180402       1 main.go:299] handling current node
	I0729 01:42:49.180432       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:42:49.180451       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	I0729 01:42:59.176549       1 main.go:295] Handling node with IPs: map[192.168.39.140:{}]
	I0729 01:42:59.176607       1 main.go:299] handling current node
	I0729 01:42:59.176629       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0729 01:42:59.176634       1 main.go:322] Node multinode-060411-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [bff07df0b86935ecddcd897c463f867356e3b9b3c0e3efbd5c2569ca981c25b4] <==
	I0729 01:38:56.538580       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 01:38:56.538632       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 01:38:56.540125       1 aggregator.go:165] initial CRD sync complete...
	I0729 01:38:56.540170       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 01:38:56.540176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 01:38:56.570683       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 01:38:56.571125       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 01:38:56.571157       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 01:38:56.575700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:38:56.576561       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:38:56.577034       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:38:56.582787       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 01:38:56.636874       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 01:38:56.640285       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:38:56.640397       1 policy_source.go:224] refreshing policies
	I0729 01:38:56.641921       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:38:56.642283       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:38:57.482837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:38:58.880838       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:38:59.075212       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:38:59.105938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:38:59.206732       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:38:59.213753       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:39:09.794128       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 01:39:09.841839       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c9289f8f5185e0bd4b1bf3bc7dc0c588f0eb55fb5ce88ba34069936ab9877ab8] <==
	I0729 01:37:10.533692       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 01:37:10.533742       1 controller.go:157] Shutting down quota evaluator
	I0729 01:37:10.533773       1 controller.go:176] quota evaluator worker shutdown
	W0729 01:37:10.535230       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 01:37:10.537473       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 01:37:10.537495       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 01:37:10.540225       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0729 01:37:10.541448       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.542114       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.542332       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.542561       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 01:37:10.542831       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0729 01:37:10.543265       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.543791       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544183       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544322       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544447       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.544590       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 01:37:10.547687       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0729 01:37:10.548064       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 01:37:10.548234       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0729 01:37:10.549784       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:37:10.550444       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:37:10.550531       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:37:10.550601       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a841bfa674c596af8d9a8081805f72f20e4bbd11af6323a38a626a429158f2b4] <==
	I0729 01:32:32.433489       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0729 01:33:00.446409       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m02\" does not exist"
	I0729 01:33:00.456735       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:33:02.437650       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-060411-m02"
	I0729 01:33:20.863400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:33:23.077276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.085395ms"
	I0729 01:33:23.116097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.955172ms"
	I0729 01:33:23.116315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.264µs"
	I0729 01:33:26.404458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.115837ms"
	I0729 01:33:26.404605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.802µs"
	I0729 01:33:26.873818       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.126979ms"
	I0729 01:33:26.874205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="138.343µs"
	I0729 01:33:55.438623       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:33:55.439522       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m03\" does not exist"
	I0729 01:33:55.456733       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m03" podCIDRs=["10.244.2.0/24"]
	I0729 01:33:57.456226       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-060411-m03"
	I0729 01:34:16.029088       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:34:43.998612       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:34:45.065561       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m03\" does not exist"
	I0729 01:34:45.066586       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:34:45.096333       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m03" podCIDRs=["10.244.3.0/24"]
	I0729 01:35:04.777304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:35:47.516543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:35:47.581431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.42228ms"
	I0729 01:35:47.581880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.174µs"
	
	
	==> kube-controller-manager [e73231afd5e9252c0fa26b35dd94ded018f615b311998c54a013e637864ee0fb] <==
	I0729 01:39:38.527751       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m02\" does not exist"
	I0729 01:39:38.542202       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m02" podCIDRs=["10.244.1.0/24"]
	I0729 01:39:40.413376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.829µs"
	I0729 01:39:40.441625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.242µs"
	I0729 01:39:40.453716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.565µs"
	I0729 01:39:40.485762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.889µs"
	I0729 01:39:40.493589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.463µs"
	I0729 01:39:40.496751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.997µs"
	I0729 01:39:58.395424       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:39:58.417116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.228µs"
	I0729 01:39:58.433238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.477µs"
	I0729 01:40:02.804772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.472325ms"
	I0729 01:40:02.805231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.554µs"
	I0729 01:40:16.574522       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:40:17.843731       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:40:17.844707       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-060411-m03\" does not exist"
	I0729 01:40:17.856277       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-060411-m03" podCIDRs=["10.244.2.0/24"]
	I0729 01:40:37.399536       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m03"
	I0729 01:40:42.801886       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-060411-m02"
	I0729 01:41:19.909730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.803981ms"
	I0729 01:41:19.909894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.357µs"
	I0729 01:41:29.776519       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2s6xq"
	I0729 01:41:29.801042       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2s6xq"
	I0729 01:41:29.801085       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-w2ncl"
	I0729 01:41:29.821451       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-w2ncl"
	
	
	==> kube-proxy [51ce34270fcd5154c5d87a9dab0259ba588d62fdb1cf925d42a21c2892c06846] <==
	I0729 01:38:58.382212       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:38:58.409703       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0729 01:38:58.486806       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:38:58.486884       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:38:58.486900       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:38:58.491578       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:38:58.491738       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:38:58.491751       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:38:58.493570       1 config.go:192] "Starting service config controller"
	I0729 01:38:58.493592       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:38:58.493622       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:38:58.493628       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:38:58.494390       1 config.go:319] "Starting node config controller"
	I0729 01:38:58.494399       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:38:58.595110       1 shared_informer.go:320] Caches are synced for node config
	I0729 01:38:58.595141       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:38:58.595151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ef2d721b7d276811ff1da16e5522d610d1f22e22ca78fa5d2ce7b1b803ff655e] <==
	I0729 01:32:14.499053       1 server_linux.go:69] "Using iptables proxy"
	I0729 01:32:14.512339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0729 01:32:14.548888       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:32:14.548920       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:32:14.548936       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:32:14.552463       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:32:14.552744       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:32:14.552792       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:32:14.554538       1 config.go:192] "Starting service config controller"
	I0729 01:32:14.554890       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:32:14.555029       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:32:14.555056       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:32:14.555847       1 config.go:319] "Starting node config controller"
	I0729 01:32:14.557487       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:32:14.655924       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 01:32:14.656042       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:32:14.657609       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6add5858278eb1e54d49eb8e86219474ef87996825592ed5e50f1f39ad079277] <==
	I0729 01:38:54.358752       1 serving.go:380] Generated self-signed cert in-memory
	W0729 01:38:56.528666       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 01:38:56.528818       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:38:56.528850       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 01:38:56.528922       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 01:38:56.560631       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 01:38:56.560669       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:38:56.565608       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 01:38:56.565705       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 01:38:56.565732       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 01:38:56.565745       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:38:56.666379       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ded106b4ad30b0a5a3e4f673f331c2c718da977fbbab17cf9305ea41e88a02fd] <==
	E0729 01:31:57.162503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 01:31:57.162605       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:31:57.162642       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:31:57.996608       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 01:31:57.996735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 01:31:58.038442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 01:31:58.038470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 01:31:58.182230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 01:31:58.182417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 01:31:58.184823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 01:31:58.184894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 01:31:58.195169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 01:31:58.195209       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 01:31:58.198160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 01:31:58.198280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 01:31:58.286119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 01:31:58.286218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 01:31:58.312177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 01:31:58.312229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 01:31:58.399698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 01:31:58.399819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 01:31:58.529728       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 01:31:58.529822       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 01:32:01.147520       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 01:37:10.507217       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430061    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18fe5ccd-46a0-4197-a687-af0fca1f518d-lib-modules\") pod \"kube-proxy-k7k6j\" (UID: \"18fe5ccd-46a0-4197-a687-af0fca1f518d\") " pod="kube-system/kube-proxy-k7k6j"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430081    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd59518-57af-4a69-8697-f7fbb6a51b5e-xtables-lock\") pod \"kindnet-8csbb\" (UID: \"6fd59518-57af-4a69-8697-f7fbb6a51b5e\") " pod="kube-system/kindnet-8csbb"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430095    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83dec14c-5f93-4dee-bb62-52cae06307f7-tmp\") pod \"storage-provisioner\" (UID: \"83dec14c-5f93-4dee-bb62-52cae06307f7\") " pod="kube-system/storage-provisioner"
	Jul 29 01:38:57 multinode-060411 kubelet[3087]: I0729 01:38:57.430119    3087 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6fd59518-57af-4a69-8697-f7fbb6a51b5e-cni-cfg\") pod \"kindnet-8csbb\" (UID: \"6fd59518-57af-4a69-8697-f7fbb6a51b5e\") " pod="kube-system/kindnet-8csbb"
	Jul 29 01:39:07 multinode-060411 kubelet[3087]: I0729 01:39:07.381434    3087 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 01:39:52 multinode-060411 kubelet[3087]: E0729 01:39:52.444732    3087 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:39:52 multinode-060411 kubelet[3087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:40:52 multinode-060411 kubelet[3087]: E0729 01:40:52.445000    3087 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:40:52 multinode-060411 kubelet[3087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:40:52 multinode-060411 kubelet[3087]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:40:52 multinode-060411 kubelet[3087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:40:52 multinode-060411 kubelet[3087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:41:52 multinode-060411 kubelet[3087]: E0729 01:41:52.445712    3087 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:41:52 multinode-060411 kubelet[3087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:41:52 multinode-060411 kubelet[3087]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:41:52 multinode-060411 kubelet[3087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:41:52 multinode-060411 kubelet[3087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 01:42:52 multinode-060411 kubelet[3087]: E0729 01:42:52.446368    3087 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 01:42:52 multinode-060411 kubelet[3087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 01:42:52 multinode-060411 kubelet[3087]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 01:42:52 multinode-060411 kubelet[3087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 01:42:52 multinode-060411 kubelet[3087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 01:43:03.591680   47802 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-9421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-060411 -n multinode-060411
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-060411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.30s)

                                                
                                    
TestPreload (265.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-609534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 01:47:23.070866   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-609534 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m0.963696413s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-609534 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-609534 image pull gcr.io/k8s-minikube/busybox: (2.658035732s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-609534
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-609534: (7.286559122s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-609534 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0729 01:51:10.264343   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-609534 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.773714657s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-609534 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-29 01:51:17.290981998 +0000 UTC m=+3871.857610106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-609534 -n test-preload-609534
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-609534 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-609534 logs -n 25: (1.054191104s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411 sudo cat                                       | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m03_multinode-060411.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt                       | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m02:/home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n                                                                 | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | multinode-060411-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-060411 ssh -n multinode-060411-m02 sudo cat                                   | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	|         | /home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-060411 node stop m03                                                          | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:34 UTC |
	| node    | multinode-060411 node start                                                             | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:34 UTC | 29 Jul 24 01:35 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:35 UTC |                     |
	| stop    | -p multinode-060411                                                                     | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:35 UTC |                     |
	| start   | -p multinode-060411                                                                     | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:37 UTC | 29 Jul 24 01:40 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC |                     |
	| node    | multinode-060411 node delete                                                            | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC | 29 Jul 24 01:40 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-060411 stop                                                                   | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:40 UTC |                     |
	| start   | -p multinode-060411                                                                     | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:43 UTC | 29 Jul 24 01:46 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-060411                                                                | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC |                     |
	| start   | -p multinode-060411-m02                                                                 | multinode-060411-m02 | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-060411-m03                                                                 | multinode-060411-m03 | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC | 29 Jul 24 01:46 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-060411                                                                 | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC |                     |
	| delete  | -p multinode-060411-m03                                                                 | multinode-060411-m03 | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC | 29 Jul 24 01:46 UTC |
	| delete  | -p multinode-060411                                                                     | multinode-060411     | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC | 29 Jul 24 01:46 UTC |
	| start   | -p test-preload-609534                                                                  | test-preload-609534  | jenkins | v1.33.1 | 29 Jul 24 01:46 UTC | 29 Jul 24 01:49 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-609534 image pull                                                          | test-preload-609534  | jenkins | v1.33.1 | 29 Jul 24 01:49 UTC | 29 Jul 24 01:49 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-609534                                                                  | test-preload-609534  | jenkins | v1.33.1 | 29 Jul 24 01:49 UTC | 29 Jul 24 01:50 UTC |
	| start   | -p test-preload-609534                                                                  | test-preload-609534  | jenkins | v1.33.1 | 29 Jul 24 01:50 UTC | 29 Jul 24 01:51 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-609534 image list                                                          | test-preload-609534  | jenkins | v1.33.1 | 29 Jul 24 01:51 UTC | 29 Jul 24 01:51 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:50:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:50:05.343714   50503 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:50:05.343837   50503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:50:05.343845   50503 out.go:304] Setting ErrFile to fd 2...
	I0729 01:50:05.343852   50503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:50:05.344065   50503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:50:05.344588   50503 out.go:298] Setting JSON to false
	I0729 01:50:05.345488   50503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5551,"bootTime":1722212254,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:50:05.345555   50503 start.go:139] virtualization: kvm guest
	I0729 01:50:05.347914   50503 out.go:177] * [test-preload-609534] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:50:05.349919   50503 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:50:05.349942   50503 notify.go:220] Checking for updates...
	I0729 01:50:05.352974   50503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:50:05.354642   50503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:50:05.356353   50503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:50:05.358019   50503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:50:05.359607   50503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:50:05.361473   50503 config.go:182] Loaded profile config "test-preload-609534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 01:50:05.361856   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:50:05.361936   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:50:05.377276   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33141
	I0729 01:50:05.377725   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:50:05.378325   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:50:05.378350   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:50:05.378703   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:50:05.378946   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:05.380781   50503 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 01:50:05.382212   50503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:50:05.382496   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:50:05.382546   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:50:05.397038   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0729 01:50:05.397459   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:50:05.398004   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:50:05.398024   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:50:05.398309   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:50:05.398496   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:05.432623   50503 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:50:05.433872   50503 start.go:297] selected driver: kvm2
	I0729 01:50:05.433890   50503 start.go:901] validating driver "kvm2" against &{Name:test-preload-609534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-609534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:50:05.433979   50503 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:50:05.434627   50503 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:50:05.434690   50503 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:50:05.449927   50503 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:50:05.450308   50503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:50:05.450338   50503 cni.go:84] Creating CNI manager for ""
	I0729 01:50:05.450346   50503 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:50:05.450400   50503 start.go:340] cluster config:
	{Name:test-preload-609534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-609534 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:50:05.450504   50503 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:50:05.452506   50503 out.go:177] * Starting "test-preload-609534" primary control-plane node in "test-preload-609534" cluster
	I0729 01:50:05.453750   50503 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 01:50:06.007189   50503 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 01:50:06.007230   50503 cache.go:56] Caching tarball of preloaded images
	I0729 01:50:06.007413   50503 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 01:50:06.009543   50503 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0729 01:50:06.010974   50503 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 01:50:06.122460   50503 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 01:50:18.470610   50503 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 01:50:18.470714   50503 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 01:50:19.312368   50503 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0729 01:50:19.312489   50503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/config.json ...
	I0729 01:50:19.312732   50503 start.go:360] acquireMachinesLock for test-preload-609534: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:50:19.312802   50503 start.go:364] duration metric: took 46.864µs to acquireMachinesLock for "test-preload-609534"
	I0729 01:50:19.312825   50503 start.go:96] Skipping create...Using existing machine configuration
	I0729 01:50:19.312833   50503 fix.go:54] fixHost starting: 
	I0729 01:50:19.313157   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:50:19.313198   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:50:19.327763   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I0729 01:50:19.328183   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:50:19.328654   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:50:19.328671   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:50:19.329006   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:50:19.329229   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:19.329452   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetState
	I0729 01:50:19.331039   50503 fix.go:112] recreateIfNeeded on test-preload-609534: state=Stopped err=<nil>
	I0729 01:50:19.331101   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	W0729 01:50:19.331259   50503 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 01:50:19.333196   50503 out.go:177] * Restarting existing kvm2 VM for "test-preload-609534" ...
	I0729 01:50:19.334266   50503 main.go:141] libmachine: (test-preload-609534) Calling .Start
	I0729 01:50:19.334438   50503 main.go:141] libmachine: (test-preload-609534) Ensuring networks are active...
	I0729 01:50:19.335115   50503 main.go:141] libmachine: (test-preload-609534) Ensuring network default is active
	I0729 01:50:19.335369   50503 main.go:141] libmachine: (test-preload-609534) Ensuring network mk-test-preload-609534 is active
	I0729 01:50:19.335766   50503 main.go:141] libmachine: (test-preload-609534) Getting domain xml...
	I0729 01:50:19.336513   50503 main.go:141] libmachine: (test-preload-609534) Creating domain...
	I0729 01:50:20.530334   50503 main.go:141] libmachine: (test-preload-609534) Waiting to get IP...
	I0729 01:50:20.531193   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:20.531642   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:20.531720   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:20.531619   50585 retry.go:31] will retry after 257.792175ms: waiting for machine to come up
	I0729 01:50:20.791458   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:20.792011   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:20.792033   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:20.791971   50585 retry.go:31] will retry after 277.790903ms: waiting for machine to come up
	I0729 01:50:21.071659   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:21.072100   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:21.072133   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:21.072042   50585 retry.go:31] will retry after 470.566332ms: waiting for machine to come up
	I0729 01:50:21.544742   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:21.545224   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:21.545255   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:21.545171   50585 retry.go:31] will retry after 509.104317ms: waiting for machine to come up
	I0729 01:50:22.055795   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:22.056262   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:22.056300   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:22.056220   50585 retry.go:31] will retry after 536.938933ms: waiting for machine to come up
	I0729 01:50:22.595082   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:22.595654   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:22.595682   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:22.595602   50585 retry.go:31] will retry after 676.550543ms: waiting for machine to come up
	I0729 01:50:23.273496   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:23.273989   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:23.274018   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:23.273939   50585 retry.go:31] will retry after 826.796537ms: waiting for machine to come up
	I0729 01:50:24.102041   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:24.102452   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:24.102486   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:24.102396   50585 retry.go:31] will retry after 1.180778136s: waiting for machine to come up
	I0729 01:50:25.285216   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:25.285678   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:25.285706   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:25.285625   50585 retry.go:31] will retry after 1.662751807s: waiting for machine to come up
	I0729 01:50:26.950301   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:26.950641   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:26.950665   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:26.950612   50585 retry.go:31] will retry after 2.236933084s: waiting for machine to come up
	I0729 01:50:29.190112   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:29.190655   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:29.190690   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:29.190586   50585 retry.go:31] will retry after 2.62966159s: waiting for machine to come up
	I0729 01:50:31.823192   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:31.823612   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:31.823658   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:31.823594   50585 retry.go:31] will retry after 3.293689242s: waiting for machine to come up
	I0729 01:50:35.119393   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:35.119783   50503 main.go:141] libmachine: (test-preload-609534) DBG | unable to find current IP address of domain test-preload-609534 in network mk-test-preload-609534
	I0729 01:50:35.119838   50503 main.go:141] libmachine: (test-preload-609534) DBG | I0729 01:50:35.119744   50585 retry.go:31] will retry after 3.390679604s: waiting for machine to come up
	I0729 01:50:38.514286   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.514771   50503 main.go:141] libmachine: (test-preload-609534) Found IP for machine: 192.168.39.21
	I0729 01:50:38.514793   50503 main.go:141] libmachine: (test-preload-609534) Reserving static IP address...
	I0729 01:50:38.514809   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has current primary IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.515243   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "test-preload-609534", mac: "52:54:00:e6:0c:47", ip: "192.168.39.21"} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.515277   50503 main.go:141] libmachine: (test-preload-609534) DBG | skip adding static IP to network mk-test-preload-609534 - found existing host DHCP lease matching {name: "test-preload-609534", mac: "52:54:00:e6:0c:47", ip: "192.168.39.21"}
	I0729 01:50:38.515291   50503 main.go:141] libmachine: (test-preload-609534) Reserved static IP address: 192.168.39.21
	I0729 01:50:38.515309   50503 main.go:141] libmachine: (test-preload-609534) Waiting for SSH to be available...
	I0729 01:50:38.515325   50503 main.go:141] libmachine: (test-preload-609534) DBG | Getting to WaitForSSH function...
	I0729 01:50:38.517284   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.517669   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.517698   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.517860   50503 main.go:141] libmachine: (test-preload-609534) DBG | Using SSH client type: external
	I0729 01:50:38.517875   50503 main.go:141] libmachine: (test-preload-609534) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa (-rw-------)
	I0729 01:50:38.517915   50503 main.go:141] libmachine: (test-preload-609534) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:50:38.517931   50503 main.go:141] libmachine: (test-preload-609534) DBG | About to run SSH command:
	I0729 01:50:38.517947   50503 main.go:141] libmachine: (test-preload-609534) DBG | exit 0
	I0729 01:50:38.639243   50503 main.go:141] libmachine: (test-preload-609534) DBG | SSH cmd err, output: <nil>: 
	I0729 01:50:38.639596   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetConfigRaw
	I0729 01:50:38.640275   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetIP
	I0729 01:50:38.642853   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.643339   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.643390   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.643683   50503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/config.json ...
	I0729 01:50:38.643915   50503 machine.go:94] provisionDockerMachine start ...
	I0729 01:50:38.643937   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:38.644144   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:38.646424   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.646768   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.646791   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.646937   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:38.647127   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:38.647308   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:38.647450   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:38.647588   50503 main.go:141] libmachine: Using SSH client type: native
	I0729 01:50:38.647771   50503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0729 01:50:38.647782   50503 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:50:38.747372   50503 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 01:50:38.747403   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetMachineName
	I0729 01:50:38.747637   50503 buildroot.go:166] provisioning hostname "test-preload-609534"
	I0729 01:50:38.747660   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetMachineName
	I0729 01:50:38.747820   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:38.750651   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.751116   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.751144   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.751289   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:38.751473   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:38.751647   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:38.751787   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:38.751963   50503 main.go:141] libmachine: Using SSH client type: native
	I0729 01:50:38.752124   50503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0729 01:50:38.752142   50503 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-609534 && echo "test-preload-609534" | sudo tee /etc/hostname
	I0729 01:50:38.866275   50503 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-609534
	
	I0729 01:50:38.866298   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:38.869067   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.869426   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.869456   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.869622   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:38.869842   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:38.870026   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:38.870199   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:38.870322   50503 main.go:141] libmachine: Using SSH client type: native
	I0729 01:50:38.870513   50503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0729 01:50:38.870531   50503 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-609534' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-609534/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-609534' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:50:38.976605   50503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:50:38.976639   50503 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:50:38.976660   50503 buildroot.go:174] setting up certificates
	I0729 01:50:38.976669   50503 provision.go:84] configureAuth start
	I0729 01:50:38.976680   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetMachineName
	I0729 01:50:38.976967   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetIP
	I0729 01:50:38.979670   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.980009   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.980045   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.980163   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:38.982132   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.982487   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:38.982515   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:38.982603   50503 provision.go:143] copyHostCerts
	I0729 01:50:38.982677   50503 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:50:38.982730   50503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:50:38.982836   50503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:50:38.983004   50503 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:50:38.983024   50503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:50:38.983084   50503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:50:38.983177   50503 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:50:38.983187   50503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:50:38.983221   50503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:50:38.983304   50503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.test-preload-609534 san=[127.0.0.1 192.168.39.21 localhost minikube test-preload-609534]
	I0729 01:50:39.130859   50503 provision.go:177] copyRemoteCerts
	I0729 01:50:39.130912   50503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:50:39.130935   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:39.133321   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.133607   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.133627   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.133775   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:39.133940   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.134128   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:39.134297   50503 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa Username:docker}
	I0729 01:50:39.213428   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 01:50:39.237877   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:50:39.262498   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:50:39.286574   50503 provision.go:87] duration metric: took 309.894235ms to configureAuth
	I0729 01:50:39.286617   50503 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:50:39.286810   50503 config.go:182] Loaded profile config "test-preload-609534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 01:50:39.286913   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:39.289361   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.289683   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.289707   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.289872   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:39.290072   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.290241   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.290386   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:39.290538   50503 main.go:141] libmachine: Using SSH client type: native
	I0729 01:50:39.290765   50503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0729 01:50:39.290787   50503 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:50:39.550381   50503 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:50:39.550410   50503 machine.go:97] duration metric: took 906.481031ms to provisionDockerMachine
	I0729 01:50:39.550422   50503 start.go:293] postStartSetup for "test-preload-609534" (driver="kvm2")
	I0729 01:50:39.550432   50503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:50:39.550446   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:39.550732   50503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:50:39.550753   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:39.553571   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.553915   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.553949   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.554094   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:39.554280   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.554433   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:39.554565   50503 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa Username:docker}
	I0729 01:50:39.633996   50503 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:50:39.638561   50503 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:50:39.638588   50503 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:50:39.638662   50503 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:50:39.638780   50503 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:50:39.638953   50503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:50:39.649774   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:50:39.679849   50503 start.go:296] duration metric: took 129.416494ms for postStartSetup
	I0729 01:50:39.679885   50503 fix.go:56] duration metric: took 20.367052855s for fixHost
	I0729 01:50:39.679916   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:39.682614   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.682941   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.682973   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.683158   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:39.683372   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.683531   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.683706   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:39.683991   50503 main.go:141] libmachine: Using SSH client type: native
	I0729 01:50:39.684172   50503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.21 22 <nil> <nil>}
	I0729 01:50:39.684184   50503 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:50:39.783920   50503 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722217839.760214430
	
	I0729 01:50:39.783957   50503 fix.go:216] guest clock: 1722217839.760214430
	I0729 01:50:39.783966   50503 fix.go:229] Guest: 2024-07-29 01:50:39.76021443 +0000 UTC Remote: 2024-07-29 01:50:39.679888709 +0000 UTC m=+34.368625449 (delta=80.325721ms)
	I0729 01:50:39.784005   50503 fix.go:200] guest clock delta is within tolerance: 80.325721ms
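The clock check logged just above (fix.go) runs `date +%s.%N` in the guest and compares the result with the host clock sampled at roughly the same instant. A minimal Go sketch of that comparison, reusing the timestamps from the log and assuming a hypothetical one-second tolerance (the real threshold lives in minikube's fix.go):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock drifts from the host clock sampled at roughly the same moment.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Timestamps taken from the log lines above.
	host := time.Date(2024, 7, 29, 1, 50, 39, 679888709, time.UTC)
	delta, err := guestClockDelta("1722217839.760214430", host)
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // hypothetical; the real threshold lives in fix.go
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
```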
	I0729 01:50:39.784013   50503 start.go:83] releasing machines lock for "test-preload-609534", held for 20.471195237s
	I0729 01:50:39.784044   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:39.784340   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetIP
	I0729 01:50:39.787107   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.787494   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.787523   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.787670   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:39.788219   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:39.788394   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:50:39.788499   50503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:50:39.788527   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:39.788612   50503 ssh_runner.go:195] Run: cat /version.json
	I0729 01:50:39.788629   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:50:39.791532   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.791771   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.791839   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.791865   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.791999   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:39.792160   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.792259   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:39.792306   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:39.792313   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:39.792405   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:50:39.792488   50503 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa Username:docker}
	I0729 01:50:39.792538   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:50:39.792664   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:50:39.792862   50503 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa Username:docker}
	I0729 01:50:39.896036   50503 ssh_runner.go:195] Run: systemctl --version
	I0729 01:50:39.902338   50503 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:50:40.050627   50503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:50:40.057963   50503 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:50:40.058022   50503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:50:40.074632   50503 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
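The probe-and-disable sequence above first looks for a loopback CNI config, then renames every bridge/podman config under /etc/cni/net.d so the runtime stops loading them. A rough Go equivalent of that rename pass, assuming the same `.mk_disabled` suffix; the authoritative logic is the `find ... -exec mv` command shown in the log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the find/mv pipeline in the log above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
}
```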
	I0729 01:50:40.074653   50503 start.go:495] detecting cgroup driver to use...
	I0729 01:50:40.074706   50503 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:50:40.093353   50503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:50:40.109037   50503 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:50:40.109108   50503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:50:40.125332   50503 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:50:40.142170   50503 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:50:40.271500   50503 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:50:40.435703   50503 docker.go:233] disabling docker service ...
	I0729 01:50:40.435762   50503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:50:40.450857   50503 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:50:40.463632   50503 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:50:40.576892   50503 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:50:40.693151   50503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:50:40.708207   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:50:40.727415   50503 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0729 01:50:40.727485   50503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.737863   50503 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:50:40.737932   50503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.748122   50503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.758424   50503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.768441   50503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:50:40.778587   50503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.788490   50503 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.805603   50503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:50:40.815559   50503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:50:40.824527   50503 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:50:40.824575   50503 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:50:40.836731   50503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
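The three commands just above are the standard netfilter preparation: probe the bridge sysctl, load br_netfilter when the key is missing, and force IPv4 forwarding on. A small Go sketch of the same fallback, assuming it runs as root inside the guest:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter checks the bridge-nf-call-iptables sysctl, loads
// br_netfilter if the key is missing, then enables IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Sysctl key absent: the module is not loaded yet, mirror the modprobe fallback.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
```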
	I0729 01:50:40.846447   50503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:50:40.956677   50503 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:50:41.093654   50503 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:50:41.093730   50503 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:50:41.098392   50503 start.go:563] Will wait 60s for crictl version
	I0729 01:50:41.098448   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:41.102286   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:50:41.140725   50503 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:50:41.140822   50503 ssh_runner.go:195] Run: crio --version
	I0729 01:50:41.169151   50503 ssh_runner.go:195] Run: crio --version
	I0729 01:50:41.200101   50503 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0729 01:50:41.201353   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetIP
	I0729 01:50:41.204122   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:41.204503   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:50:41.204535   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:50:41.204725   50503 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:50:41.208894   50503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
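The grep/bash pair above makes sure the guest's /etc/hosts carries exactly one host.minikube.internal entry. A Go rendering of the same rewrite, a sketch that assumes root access to /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites /etc/hosts so that exactly one line maps hostname
// to ip, mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(ip, hostname string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("host.minikube.internal entry ensured")
}
```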
	I0729 01:50:41.221560   50503 kubeadm.go:883] updating cluster {Name:test-preload-609534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-609534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:50:41.221700   50503 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 01:50:41.221749   50503 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:50:41.257519   50503 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 01:50:41.257601   50503 ssh_runner.go:195] Run: which lz4
	I0729 01:50:41.261750   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 01:50:41.266068   50503 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 01:50:41.266101   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0729 01:50:42.810165   50503 crio.go:462] duration metric: took 1.548458336s to copy over tarball
	I0729 01:50:42.810235   50503 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 01:50:45.177166   50503 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.366902714s)
	I0729 01:50:45.177198   50503 crio.go:469] duration metric: took 2.367007788s to extract the tarball
	I0729 01:50:45.177209   50503 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 01:50:45.219390   50503 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:50:45.264806   50503 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 01:50:45.264832   50503 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 01:50:45.264921   50503 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.264942   50503 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0729 01:50:45.264971   50503 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:45.264983   50503 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:45.265007   50503 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.264903   50503 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:50:45.265032   50503 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:45.265016   50503 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:45.266560   50503 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:50:45.266568   50503 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.266573   50503 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:45.266562   50503 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:45.266594   50503 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:45.266600   50503 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 01:50:45.266560   50503 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.266561   50503 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:45.456171   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.486030   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.495109   50503 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0729 01:50:45.495149   50503 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.495195   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.535005   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.535103   50503 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0729 01:50:45.535137   50503 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.535171   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.571462   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.571552   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.598000   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:45.605895   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:45.608062   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 01:50:45.618206   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:45.620266   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:45.741285   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.741301   50503 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0729 01:50:45.741319   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 01:50:45.741321   50503 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0729 01:50:45.741353   50503 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0729 01:50:45.741400   50503 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0729 01:50:45.741423   50503 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:45.741465   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.741406   50503 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0729 01:50:45.741540   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.741360   50503 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:45.741399   50503 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0729 01:50:45.741622   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.741643   50503 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:45.741331   50503 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:45.741683   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.741703   50503 ssh_runner.go:195] Run: which crictl
	I0729 01:50:45.799364   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 01:50:45.799413   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 01:50:45.799456   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:45.799481   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 01:50:45.799501   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:45.799511   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 01:50:45.799556   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:45.799573   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:45.923281   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 01:50:45.923376   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:45.923424   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:45.923455   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:45.923380   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 01:50:45.923490   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0729 01:50:45.923502   50503 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 01:50:45.923524   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 01:50:45.923554   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 01:50:45.923615   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:46.046851   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 01:50:46.046900   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 01:50:46.046937   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0729 01:50:46.047001   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 01:50:46.184871   50503 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:50:48.668941   50503 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.745392784s)
	I0729 01:50:48.668982   50503 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (2.745408313s)
	I0729 01:50:48.669062   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 01:50:48.668983   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0729 01:50:48.669075   50503 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.745426651s)
	I0729 01:50:48.669097   50503 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 01:50:48.669145   50503 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 01:50:48.669150   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 01:50:48.669152   50503 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.622268412s)
	I0729 01:50:48.669192   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 01:50:48.669218   50503 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.622297791s)
	I0729 01:50:48.669258   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 01:50:48.669271   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0729 01:50:48.669270   50503 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.622251241s)
	I0729 01:50:48.669298   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0729 01:50:48.669315   50503 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.484416618s)
	I0729 01:50:48.669338   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 01:50:48.669377   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 01:50:48.757573   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0729 01:50:48.757680   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0729 01:50:48.757799   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0729 01:50:48.763588   50503 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 01:50:48.763634   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0729 01:50:48.763595   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0729 01:50:48.763703   50503 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 01:50:49.437997   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0729 01:50:49.438052   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0729 01:50:49.438063   50503 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 01:50:49.438109   50503 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0729 01:50:49.438120   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0729 01:50:51.687257   50503 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.249116757s)
	I0729 01:50:51.687289   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 01:50:51.687311   50503 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 01:50:51.687377   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0729 01:50:52.025245   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 01:50:52.025294   50503 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 01:50:52.025348   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 01:50:52.470366   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0729 01:50:52.470408   50503 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 01:50:52.470461   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0729 01:50:52.614904   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0729 01:50:52.614952   50503 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 01:50:52.615015   50503 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 01:50:53.358930   50503 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0729 01:50:53.358985   50503 cache_images.go:123] Successfully loaded all cached images
	I0729 01:50:53.358992   50503 cache_images.go:92] duration metric: took 8.094149709s to LoadCachedImages
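The 8s LoadCachedImages phase above is the per-image fallback path: every image missing from CRI-O has any stale tag removed with crictl, its cached tarball placed under /var/lib/minikube/images (the copy is skipped when it already exists), and is then loaded with `podman load`. A condensed Go sketch of one iteration, assuming the tarballs are already present on the node:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// loadCachedImage mirrors one iteration of the cache_images flow above:
// drop any stale tag from the runtime, then load the tarball via podman.
func loadCachedImage(image string) error {
	// e.g. registry.k8s.io/kube-proxy:v1.24.4 -> /var/lib/minikube/images/kube-proxy_v1.24.4
	base := strings.ReplaceAll(filepath.Base(image), ":", "_")
	tarball := filepath.Join("/var/lib/minikube/images", base)

	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached tarball %s not present: %w", tarball, err)
	}
	// Ignore the error: the tag may simply not exist yet in the runtime.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()

	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-proxy:v1.24.4",
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/etcd:3.5.3-0",
	} {
		if err := loadCachedImage(img); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```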
	I0729 01:50:53.359008   50503 kubeadm.go:934] updating node { 192.168.39.21 8443 v1.24.4 crio true true} ...
	I0729 01:50:53.359166   50503 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-609534 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-609534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:50:53.359254   50503 ssh_runner.go:195] Run: crio config
	I0729 01:50:53.405163   50503 cni.go:84] Creating CNI manager for ""
	I0729 01:50:53.405188   50503 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:50:53.405202   50503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:50:53.405223   50503 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.21 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-609534 NodeName:test-preload-609534 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:50:53.405361   50503 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-609534"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 01:50:53.405421   50503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0729 01:50:53.416255   50503 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:50:53.416318   50503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:50:53.426225   50503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0729 01:50:53.442443   50503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:50:53.459108   50503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0729 01:50:53.476323   50503 ssh_runner.go:195] Run: grep 192.168.39.21	control-plane.minikube.internal$ /etc/hosts
	I0729 01:50:53.480289   50503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:50:53.493176   50503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:50:53.610130   50503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:50:53.627246   50503 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534 for IP: 192.168.39.21
	I0729 01:50:53.627267   50503 certs.go:194] generating shared ca certs ...
	I0729 01:50:53.627282   50503 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:50:53.627412   50503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:50:53.627450   50503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:50:53.627463   50503 certs.go:256] generating profile certs ...
	I0729 01:50:53.627569   50503 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/client.key
	I0729 01:50:53.627625   50503 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/apiserver.key.026f690d
	I0729 01:50:53.627657   50503 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/proxy-client.key
	I0729 01:50:53.627764   50503 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:50:53.627816   50503 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:50:53.627831   50503 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:50:53.627860   50503 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:50:53.627883   50503 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:50:53.627905   50503 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:50:53.627943   50503 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:50:53.628581   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:50:53.692609   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:50:53.721375   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:50:53.757948   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:50:53.787494   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 01:50:53.816527   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:50:53.851466   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:50:53.876197   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:50:53.900984   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:50:53.924560   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:50:53.947794   50503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:50:53.971528   50503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:50:53.988601   50503 ssh_runner.go:195] Run: openssl version
	I0729 01:50:53.994522   50503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:50:54.006192   50503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:50:54.010831   50503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:50:54.010896   50503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:50:54.016837   50503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:50:54.028672   50503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:50:54.039834   50503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:50:54.044418   50503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:50:54.044471   50503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:50:54.050153   50503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:50:54.061412   50503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:50:54.072661   50503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:50:54.077503   50503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:50:54.077547   50503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:50:54.083197   50503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
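The openssl/ln sequence above is a c_rehash-style trust install: each CA PEM gets a name symlink under /etc/ssl/certs plus a `<subject-hash>.0` symlink so OpenSSL-based clients can resolve it. A Go sketch of one such install, assuming `openssl x509 -hash -noout` prints only the subject hash; the link targets here are simplified relative to the exact ln -fs calls in the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs under both its own name and
// its OpenSSL subject hash (e.g. b5213941.0), matching the ln -fs calls above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	nameLink := filepath.Join("/etc/ssl/certs", filepath.Base(certPath))
	hashLink := filepath.Join("/etc/ssl/certs", hash+".0")
	for _, link := range []string{nameLink, hashLink} {
		_ = os.Remove(link) // emulate ln -f
		if err := os.Symlink(certPath, link); err != nil {
			return err
		}
	}
	fmt.Printf("linked %s as %s and %s\n", certPath, nameLink, hashLink)
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```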
	I0729 01:50:54.094186   50503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:50:54.098607   50503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:50:54.104583   50503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:50:54.110372   50503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:50:54.116318   50503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:50:54.122093   50503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:50:54.128153   50503 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
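The run of `-checkend 86400` commands above asks whether each control-plane certificate is still valid 24 hours from now. The same question answered in Go with crypto/x509 instead of shelling out to openssl, as a sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the same question `openssl x509 -checkend` answers with its exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```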
	I0729 01:50:54.133974   50503 kubeadm.go:392] StartCluster: {Name:test-preload-609534 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-609534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:50:54.134075   50503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:50:54.134133   50503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:50:54.171815   50503 cri.go:89] found id: ""
	I0729 01:50:54.171892   50503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 01:50:54.182618   50503 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 01:50:54.182638   50503 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 01:50:54.182679   50503 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 01:50:54.192907   50503 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:50:54.193399   50503 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-609534" does not appear in /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:50:54.193583   50503 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-9421/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-609534" cluster setting kubeconfig missing "test-preload-609534" context setting]
	I0729 01:50:54.193884   50503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:50:54.194544   50503 kapi.go:59] client config for test-preload-609534: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 01:50:54.195174   50503 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 01:50:54.205519   50503 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.21
	I0729 01:50:54.205553   50503 kubeadm.go:1160] stopping kube-system containers ...
	I0729 01:50:54.205563   50503 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 01:50:54.205609   50503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:50:54.242286   50503 cri.go:89] found id: ""
	I0729 01:50:54.242360   50503 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 01:50:54.259429   50503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 01:50:54.270033   50503 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 01:50:54.270055   50503 kubeadm.go:157] found existing configuration files:
	
	I0729 01:50:54.270125   50503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 01:50:54.279606   50503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 01:50:54.279682   50503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 01:50:54.289402   50503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 01:50:54.298709   50503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 01:50:54.298765   50503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 01:50:54.308266   50503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 01:50:54.317421   50503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 01:50:54.317483   50503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 01:50:54.327318   50503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 01:50:54.336894   50503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 01:50:54.336959   50503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 01:50:54.347027   50503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 01:50:54.359084   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 01:50:54.459338   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 01:50:55.401421   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 01:50:55.677490   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 01:50:55.765387   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
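The restart path above re-runs a fixed sequence of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml. A rough sketch of that loop, run via local exec in place of minikube's ssh_runner (the commands are copied from the log; the loop structure is an assumption for illustration):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phases in the order the log shows them being invoked.
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
                phase,
            )
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
                return
            }
        }
    }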
	I0729 01:50:55.871102   50503 api_server.go:52] waiting for apiserver process to appear ...
	I0729 01:50:55.871198   50503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:50:56.372183   50503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:50:56.871921   50503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:50:56.887763   50503 api_server.go:72] duration metric: took 1.016656249s to wait for apiserver process to appear ...
	I0729 01:50:56.887801   50503 api_server.go:88] waiting for apiserver healthz status ...
	I0729 01:50:56.887822   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:50:56.888376   50503 api_server.go:269] stopped: https://192.168.39.21:8443/healthz: Get "https://192.168.39.21:8443/healthz": dial tcp 192.168.39.21:8443: connect: connection refused
	I0729 01:50:57.387939   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:51:00.684527   50503 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 01:51:00.684565   50503 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 01:51:00.684586   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:51:00.703354   50503 api_server.go:279] https://192.168.39.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 01:51:00.703385   50503 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 01:51:00.888704   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:51:00.894868   50503 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 01:51:00.894898   50503 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 01:51:01.388538   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:51:01.394345   50503 api_server.go:279] https://192.168.39.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 01:51:01.394379   50503 api_server.go:103] status: https://192.168.39.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 01:51:01.887981   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:51:01.899197   50503 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
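The healthz probes above pass through 403 (the anonymous user is rejected until the RBAC bootstrap roles exist) and 500 (the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing) before finally returning 200. A minimal polling sketch of that wait; TLS verification is skipped here purely for brevity and is not how minikube authenticates:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.21:8443/healthz", time.Minute))
    }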
	I0729 01:51:01.915533   50503 api_server.go:141] control plane version: v1.24.4
	I0729 01:51:01.915566   50503 api_server.go:131] duration metric: took 5.027756709s to wait for apiserver health ...
	I0729 01:51:01.915575   50503 cni.go:84] Creating CNI manager for ""
	I0729 01:51:01.915581   50503 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:51:01.917316   50503 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 01:51:01.918710   50503 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 01:51:01.942709   50503 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
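The 496-byte file copied above is minikube's bridge CNI configuration for CRI-O. A generic bridge conflist of the same general shape, written out from Go to stay in one language — the JSON content below is illustrative only and is not the exact file minikube generates:

    package main

    import "os"

    // Illustrative bridge CNI conflist (bridge + portmap with host-local IPAM).
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Written to a temp path here; the log shows /etc/cni/net.d/1-k8s.conflist.
        if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }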
	I0729 01:51:01.964803   50503 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 01:51:01.982213   50503 system_pods.go:59] 7 kube-system pods found
	I0729 01:51:01.982249   50503 system_pods.go:61] "coredns-6d4b75cb6d-65jsw" [3abf5dab-d5f4-415b-8bcf-a373b0480d34] Running
	I0729 01:51:01.982256   50503 system_pods.go:61] "etcd-test-preload-609534" [b80af3cb-d5da-4cc0-86a6-3369afcbbac0] Running
	I0729 01:51:01.982265   50503 system_pods.go:61] "kube-apiserver-test-preload-609534" [ec21dba2-8410-4107-b368-b34de547f459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 01:51:01.982271   50503 system_pods.go:61] "kube-controller-manager-test-preload-609534" [dd1c9616-8290-4ffd-a479-ebc730226859] Running
	I0729 01:51:01.982280   50503 system_pods.go:61] "kube-proxy-z4zc7" [ddaf4d28-2b9e-47f4-98aa-3ad35e10a604] Running
	I0729 01:51:01.982288   50503 system_pods.go:61] "kube-scheduler-test-preload-609534" [15d0b076-ff1d-4fda-9a1f-608fc9045f89] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 01:51:01.982296   50503 system_pods.go:61] "storage-provisioner" [5ba73ad5-07d3-4694-9b15-7ac3e0465d02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 01:51:01.982309   50503 system_pods.go:74] duration metric: took 17.478797ms to wait for pod list to return data ...
	I0729 01:51:01.982334   50503 node_conditions.go:102] verifying NodePressure condition ...
	I0729 01:51:01.985867   50503 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:51:01.985897   50503 node_conditions.go:123] node cpu capacity is 2
	I0729 01:51:01.985921   50503 node_conditions.go:105] duration metric: took 3.569492ms to run NodePressure ...
	I0729 01:51:01.985943   50503 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 01:51:02.228979   50503 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 01:51:02.233209   50503 kubeadm.go:739] kubelet initialised
	I0729 01:51:02.233235   50503 kubeadm.go:740] duration metric: took 4.2287ms waiting for restarted kubelet to initialise ...
	I0729 01:51:02.233244   50503 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:51:02.239370   50503 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:02.245967   50503 pod_ready.go:97] node "test-preload-609534" hosting pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.246015   50503 pod_ready.go:81] duration metric: took 6.622433ms for pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace to be "Ready" ...
	E0729 01:51:02.246025   50503 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-609534" hosting pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.246047   50503 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:02.251195   50503 pod_ready.go:97] node "test-preload-609534" hosting pod "etcd-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.251215   50503 pod_ready.go:81] duration metric: took 5.154599ms for pod "etcd-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	E0729 01:51:02.251223   50503 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-609534" hosting pod "etcd-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.251231   50503 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:02.255394   50503 pod_ready.go:97] node "test-preload-609534" hosting pod "kube-apiserver-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.255423   50503 pod_ready.go:81] duration metric: took 4.179881ms for pod "kube-apiserver-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	E0729 01:51:02.255433   50503 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-609534" hosting pod "kube-apiserver-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.255442   50503 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:02.369627   50503 pod_ready.go:97] node "test-preload-609534" hosting pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.369668   50503 pod_ready.go:81] duration metric: took 114.208335ms for pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	E0729 01:51:02.369681   50503 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-609534" hosting pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.369690   50503 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z4zc7" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:02.769115   50503 pod_ready.go:97] node "test-preload-609534" hosting pod "kube-proxy-z4zc7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.769151   50503 pod_ready.go:81] duration metric: took 399.448228ms for pod "kube-proxy-z4zc7" in "kube-system" namespace to be "Ready" ...
	E0729 01:51:02.769164   50503 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-609534" hosting pod "kube-proxy-z4zc7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:02.769172   50503 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:03.168412   50503 pod_ready.go:97] node "test-preload-609534" hosting pod "kube-scheduler-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:03.168441   50503 pod_ready.go:81] duration metric: took 399.262535ms for pod "kube-scheduler-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	E0729 01:51:03.168451   50503 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-609534" hosting pod "kube-scheduler-test-preload-609534" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:03.168458   50503 pod_ready.go:38] duration metric: took 935.20724ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
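Each pod_ready wait above short-circuits with a "(skipping!)" error because the node itself has not yet reported Ready; minikube records the condition failure and moves on to the next pod instead of blocking for the full 4m0s. A generic polling helper in that spirit — the condition function is a placeholder, not minikube's pod_ready implementation, which queries the Kubernetes API for the pod's Ready condition:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls cond until it returns true, returns an error, or the
    // timeout elapses.
    func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            done, err := cond()
            if err != nil {
                return err // e.g. node hosting the pod is not yet "Ready" (skipping!)
            }
            if done {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Hypothetical condition for illustration only.
        err := waitFor(4*time.Minute, 500*time.Millisecond, func() (bool, error) {
            return true, nil
        })
        fmt.Println(err)
    }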
	I0729 01:51:03.168483   50503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 01:51:03.181229   50503 ops.go:34] apiserver oom_adj: -16
	I0729 01:51:03.181259   50503 kubeadm.go:597] duration metric: took 8.998607831s to restartPrimaryControlPlane
	I0729 01:51:03.181269   50503 kubeadm.go:394] duration metric: took 9.04730553s to StartCluster
	I0729 01:51:03.181289   50503 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:51:03.181370   50503 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:51:03.182077   50503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:51:03.182320   50503 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.21 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:51:03.182452   50503 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 01:51:03.182495   50503 config.go:182] Loaded profile config "test-preload-609534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 01:51:03.182523   50503 addons.go:69] Setting storage-provisioner=true in profile "test-preload-609534"
	I0729 01:51:03.182545   50503 addons.go:234] Setting addon storage-provisioner=true in "test-preload-609534"
	W0729 01:51:03.182551   50503 addons.go:243] addon storage-provisioner should already be in state true
	I0729 01:51:03.182548   50503 addons.go:69] Setting default-storageclass=true in profile "test-preload-609534"
	I0729 01:51:03.182573   50503 host.go:66] Checking if "test-preload-609534" exists ...
	I0729 01:51:03.182579   50503 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-609534"
	I0729 01:51:03.182880   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:51:03.182928   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:51:03.183004   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:51:03.183046   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:51:03.184345   50503 out.go:177] * Verifying Kubernetes components...
	I0729 01:51:03.185615   50503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:51:03.198190   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0729 01:51:03.198637   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:51:03.199127   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:51:03.199152   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:51:03.199459   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:51:03.199663   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetState
	I0729 01:51:03.202087   50503 kapi.go:59] client config for test-preload-609534: &rest.Config{Host:"https://192.168.39.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/profiles/test-preload-609534/client.key", CAFile:"/home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 01:51:03.202300   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42991
	I0729 01:51:03.202382   50503 addons.go:234] Setting addon default-storageclass=true in "test-preload-609534"
	W0729 01:51:03.202397   50503 addons.go:243] addon default-storageclass should already be in state true
	I0729 01:51:03.202426   50503 host.go:66] Checking if "test-preload-609534" exists ...
	I0729 01:51:03.202725   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:51:03.202740   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:51:03.202761   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:51:03.203181   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:51:03.203202   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:51:03.203530   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:51:03.203927   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:51:03.203955   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:51:03.217483   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42265
	I0729 01:51:03.217541   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0729 01:51:03.217865   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:51:03.217970   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:51:03.218325   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:51:03.218340   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:51:03.218588   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:51:03.218618   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:51:03.218637   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:51:03.218812   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetState
	I0729 01:51:03.218899   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:51:03.219452   50503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:51:03.219495   50503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:51:03.220443   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:51:03.222839   50503 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:51:03.224452   50503 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 01:51:03.224476   50503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 01:51:03.224503   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:51:03.227641   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:51:03.228067   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:51:03.228089   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:51:03.228245   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:51:03.228442   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:51:03.228604   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:51:03.228745   50503 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa Username:docker}
	I0729 01:51:03.236235   50503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0729 01:51:03.236708   50503 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:51:03.237239   50503 main.go:141] libmachine: Using API Version  1
	I0729 01:51:03.237261   50503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:51:03.237608   50503 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:51:03.237798   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetState
	I0729 01:51:03.239517   50503 main.go:141] libmachine: (test-preload-609534) Calling .DriverName
	I0729 01:51:03.239736   50503 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 01:51:03.239754   50503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 01:51:03.239772   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHHostname
	I0729 01:51:03.242880   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:51:03.243359   50503 main.go:141] libmachine: (test-preload-609534) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:0c:47", ip: ""} in network mk-test-preload-609534: {Iface:virbr1 ExpiryTime:2024-07-29 02:50:30 +0000 UTC Type:0 Mac:52:54:00:e6:0c:47 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:test-preload-609534 Clientid:01:52:54:00:e6:0c:47}
	I0729 01:51:03.243391   50503 main.go:141] libmachine: (test-preload-609534) DBG | domain test-preload-609534 has defined IP address 192.168.39.21 and MAC address 52:54:00:e6:0c:47 in network mk-test-preload-609534
	I0729 01:51:03.243557   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHPort
	I0729 01:51:03.243736   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHKeyPath
	I0729 01:51:03.243921   50503 main.go:141] libmachine: (test-preload-609534) Calling .GetSSHUsername
	I0729 01:51:03.244065   50503 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/test-preload-609534/id_rsa Username:docker}
	I0729 01:51:03.378830   50503 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:51:03.397619   50503 node_ready.go:35] waiting up to 6m0s for node "test-preload-609534" to be "Ready" ...
	I0729 01:51:03.521975   50503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 01:51:03.544146   50503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 01:51:04.521849   50503 main.go:141] libmachine: Making call to close driver server
	I0729 01:51:04.521869   50503 main.go:141] libmachine: (test-preload-609534) Calling .Close
	I0729 01:51:04.521951   50503 main.go:141] libmachine: Making call to close driver server
	I0729 01:51:04.521978   50503 main.go:141] libmachine: (test-preload-609534) Calling .Close
	I0729 01:51:04.522150   50503 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:51:04.522402   50503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:51:04.522418   50503 main.go:141] libmachine: Making call to close driver server
	I0729 01:51:04.522427   50503 main.go:141] libmachine: (test-preload-609534) Calling .Close
	I0729 01:51:04.522200   50503 main.go:141] libmachine: (test-preload-609534) DBG | Closing plugin on server side
	I0729 01:51:04.522217   50503 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:51:04.522501   50503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:51:04.522512   50503 main.go:141] libmachine: Making call to close driver server
	I0729 01:51:04.522521   50503 main.go:141] libmachine: (test-preload-609534) Calling .Close
	I0729 01:51:04.522227   50503 main.go:141] libmachine: (test-preload-609534) DBG | Closing plugin on server side
	I0729 01:51:04.522682   50503 main.go:141] libmachine: (test-preload-609534) DBG | Closing plugin on server side
	I0729 01:51:04.522728   50503 main.go:141] libmachine: (test-preload-609534) DBG | Closing plugin on server side
	I0729 01:51:04.522746   50503 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:51:04.522748   50503 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:51:04.522757   50503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:51:04.522758   50503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:51:04.529489   50503 main.go:141] libmachine: Making call to close driver server
	I0729 01:51:04.529504   50503 main.go:141] libmachine: (test-preload-609534) Calling .Close
	I0729 01:51:04.529700   50503 main.go:141] libmachine: Successfully made call to close driver server
	I0729 01:51:04.529715   50503 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 01:51:04.532370   50503 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 01:51:04.533539   50503 addons.go:510] duration metric: took 1.351093121s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 01:51:05.405306   50503 node_ready.go:53] node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:07.901686   50503 node_ready.go:53] node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:10.401776   50503 node_ready.go:53] node "test-preload-609534" has status "Ready":"False"
	I0729 01:51:11.402041   50503 node_ready.go:49] node "test-preload-609534" has status "Ready":"True"
	I0729 01:51:11.402064   50503 node_ready.go:38] duration metric: took 8.004411577s for node "test-preload-609534" to be "Ready" ...
	I0729 01:51:11.402071   50503 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:51:11.406728   50503 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:11.411884   50503 pod_ready.go:92] pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace has status "Ready":"True"
	I0729 01:51:11.411905   50503 pod_ready.go:81] duration metric: took 5.153366ms for pod "coredns-6d4b75cb6d-65jsw" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:11.411915   50503 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:11.416023   50503 pod_ready.go:92] pod "etcd-test-preload-609534" in "kube-system" namespace has status "Ready":"True"
	I0729 01:51:11.416052   50503 pod_ready.go:81] duration metric: took 4.130155ms for pod "etcd-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:11.416064   50503 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:13.422806   50503 pod_ready.go:102] pod "kube-apiserver-test-preload-609534" in "kube-system" namespace has status "Ready":"False"
	I0729 01:51:14.422247   50503 pod_ready.go:92] pod "kube-apiserver-test-preload-609534" in "kube-system" namespace has status "Ready":"True"
	I0729 01:51:14.422265   50503 pod_ready.go:81] duration metric: took 3.006194738s for pod "kube-apiserver-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:14.422274   50503 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:14.426052   50503 pod_ready.go:92] pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace has status "Ready":"True"
	I0729 01:51:14.426069   50503 pod_ready.go:81] duration metric: took 3.788348ms for pod "kube-controller-manager-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:14.426077   50503 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z4zc7" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:14.430223   50503 pod_ready.go:92] pod "kube-proxy-z4zc7" in "kube-system" namespace has status "Ready":"True"
	I0729 01:51:14.430242   50503 pod_ready.go:81] duration metric: took 4.156248ms for pod "kube-proxy-z4zc7" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:14.430253   50503 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:16.437396   50503 pod_ready.go:92] pod "kube-scheduler-test-preload-609534" in "kube-system" namespace has status "Ready":"True"
	I0729 01:51:16.437418   50503 pod_ready.go:81] duration metric: took 2.007157847s for pod "kube-scheduler-test-preload-609534" in "kube-system" namespace to be "Ready" ...
	I0729 01:51:16.437428   50503 pod_ready.go:38] duration metric: took 5.035347747s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 01:51:16.437440   50503 api_server.go:52] waiting for apiserver process to appear ...
	I0729 01:51:16.437483   50503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:51:16.454578   50503 api_server.go:72] duration metric: took 13.272228868s to wait for apiserver process to appear ...
	I0729 01:51:16.454603   50503 api_server.go:88] waiting for apiserver healthz status ...
	I0729 01:51:16.454622   50503 api_server.go:253] Checking apiserver healthz at https://192.168.39.21:8443/healthz ...
	I0729 01:51:16.459648   50503 api_server.go:279] https://192.168.39.21:8443/healthz returned 200:
	ok
	I0729 01:51:16.460520   50503 api_server.go:141] control plane version: v1.24.4
	I0729 01:51:16.460537   50503 api_server.go:131] duration metric: took 5.928325ms to wait for apiserver health ...
	I0729 01:51:16.460544   50503 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 01:51:16.465477   50503 system_pods.go:59] 7 kube-system pods found
	I0729 01:51:16.465498   50503 system_pods.go:61] "coredns-6d4b75cb6d-65jsw" [3abf5dab-d5f4-415b-8bcf-a373b0480d34] Running
	I0729 01:51:16.465502   50503 system_pods.go:61] "etcd-test-preload-609534" [b80af3cb-d5da-4cc0-86a6-3369afcbbac0] Running
	I0729 01:51:16.465506   50503 system_pods.go:61] "kube-apiserver-test-preload-609534" [ec21dba2-8410-4107-b368-b34de547f459] Running
	I0729 01:51:16.465515   50503 system_pods.go:61] "kube-controller-manager-test-preload-609534" [dd1c9616-8290-4ffd-a479-ebc730226859] Running
	I0729 01:51:16.465519   50503 system_pods.go:61] "kube-proxy-z4zc7" [ddaf4d28-2b9e-47f4-98aa-3ad35e10a604] Running
	I0729 01:51:16.465522   50503 system_pods.go:61] "kube-scheduler-test-preload-609534" [15d0b076-ff1d-4fda-9a1f-608fc9045f89] Running
	I0729 01:51:16.465525   50503 system_pods.go:61] "storage-provisioner" [5ba73ad5-07d3-4694-9b15-7ac3e0465d02] Running
	I0729 01:51:16.465530   50503 system_pods.go:74] duration metric: took 4.981859ms to wait for pod list to return data ...
	I0729 01:51:16.465536   50503 default_sa.go:34] waiting for default service account to be created ...
	I0729 01:51:16.601814   50503 default_sa.go:45] found service account: "default"
	I0729 01:51:16.601839   50503 default_sa.go:55] duration metric: took 136.296955ms for default service account to be created ...
	I0729 01:51:16.601847   50503 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 01:51:16.804651   50503 system_pods.go:86] 7 kube-system pods found
	I0729 01:51:16.804691   50503 system_pods.go:89] "coredns-6d4b75cb6d-65jsw" [3abf5dab-d5f4-415b-8bcf-a373b0480d34] Running
	I0729 01:51:16.804697   50503 system_pods.go:89] "etcd-test-preload-609534" [b80af3cb-d5da-4cc0-86a6-3369afcbbac0] Running
	I0729 01:51:16.804701   50503 system_pods.go:89] "kube-apiserver-test-preload-609534" [ec21dba2-8410-4107-b368-b34de547f459] Running
	I0729 01:51:16.804705   50503 system_pods.go:89] "kube-controller-manager-test-preload-609534" [dd1c9616-8290-4ffd-a479-ebc730226859] Running
	I0729 01:51:16.804708   50503 system_pods.go:89] "kube-proxy-z4zc7" [ddaf4d28-2b9e-47f4-98aa-3ad35e10a604] Running
	I0729 01:51:16.804712   50503 system_pods.go:89] "kube-scheduler-test-preload-609534" [15d0b076-ff1d-4fda-9a1f-608fc9045f89] Running
	I0729 01:51:16.804715   50503 system_pods.go:89] "storage-provisioner" [5ba73ad5-07d3-4694-9b15-7ac3e0465d02] Running
	I0729 01:51:16.804720   50503 system_pods.go:126] duration metric: took 202.867947ms to wait for k8s-apps to be running ...
	I0729 01:51:16.804732   50503 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 01:51:16.804772   50503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:51:16.820007   50503 system_svc.go:56] duration metric: took 15.272529ms WaitForService to wait for kubelet
	I0729 01:51:16.820037   50503 kubeadm.go:582] duration metric: took 13.637689668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 01:51:16.820060   50503 node_conditions.go:102] verifying NodePressure condition ...
	I0729 01:51:17.003329   50503 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 01:51:17.003353   50503 node_conditions.go:123] node cpu capacity is 2
	I0729 01:51:17.003362   50503 node_conditions.go:105] duration metric: took 183.298387ms to run NodePressure ...
	I0729 01:51:17.003373   50503 start.go:241] waiting for startup goroutines ...
	I0729 01:51:17.003380   50503 start.go:246] waiting for cluster config update ...
	I0729 01:51:17.003389   50503 start.go:255] writing updated cluster config ...
	I0729 01:51:17.003656   50503 ssh_runner.go:195] Run: rm -f paused
	I0729 01:51:17.049396   50503 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0729 01:51:17.051157   50503 out.go:177] 
	W0729 01:51:17.052283   50503 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0729 01:51:17.053345   50503 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0729 01:51:17.054383   50503 out.go:177] * Done! kubectl is now configured to use "test-preload-609534" cluster and "default" namespace by default
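The closing warning reflects kubectl's version-skew policy: the host kubectl is 1.30.3 while the cluster runs 1.24.4, a minor-version difference of 6, well beyond the supported ±1. A tiny sketch of that calculation (the helper below is illustrative, not minikube's version-comparison code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. "1.30.3" vs "1.24.4" -> 6.
    func minorSkew(client, server string) int {
        minor := func(v string) int {
            n, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return n
        }
        d := minor(client) - minor(server)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println(minorSkew("1.30.3", "1.24.4")) // prints 6, matching "(minor skew: 6)"
    }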
	
	
	==> CRI-O <==
	Jul 29 01:51:17 test-preload-609534 crio[686]: time="2024-07-29 01:51:17.963994262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6dcb311-8c1c-44fe-9bfc-71730af5ce9e name=/runtime.v1.RuntimeService/Version
	Jul 29 01:51:17 test-preload-609534 crio[686]: time="2024-07-29 01:51:17.965170645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7821cad1-08ed-460c-a6c9-461f26cc77d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:51:17 test-preload-609534 crio[686]: time="2024-07-29 01:51:17.965613775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217877965586091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7821cad1-08ed-460c-a6c9-461f26cc77d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:51:17 test-preload-609534 crio[686]: time="2024-07-29 01:51:17.966120304Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d477d398-b16b-46ba-9153-c41829a3c12d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:17 test-preload-609534 crio[686]: time="2024-07-29 01:51:17.966188003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d477d398-b16b-46ba-9153-c41829a3c12d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:17 test-preload-609534 crio[686]: time="2024-07-29 01:51:17.966364007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15272be60aa0111b46c18c0aaeade293e73d2fb0fcec24070a91af52fcf06dd1,PodSandboxId:87260b9ccc7d01fd179557eaed08e15af236d5059be0fe919d9b2f8354d067b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722217868931738474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-65jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abf5dab-d5f4-415b-8bcf-a373b0480d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdb6a4de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e200c2e0d4ee10f9bfb583b8c404bad9320e1f4a269238b8677bbfe6fded925,PodSandboxId:b91299a5281b6b9c5a36e386f18338a9fd18e157205ab0cfa3a0cdf1ae9d36ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217861953632978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ba73ad5-07d3-4694-9b15-7ac3e0465d02,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1b3341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062e24783c79eac80f7e21a4fa78f4e0555f94d37438cefc3a9fa81527437f41,PodSandboxId:6b51e08c747f0735fce7e0d17ef8cf25ade66bcfb7a7d35d53703b737411f327,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722217861962052562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4zc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
af4d28-2b9e-47f4-98aa-3ad35e10a604,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd74fc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fcb438913d6feec5ddd4bcf6c3e937c2776087fb92b1bc35ab22a1c9f69a0e,PodSandboxId:df68c832f647756f1524952e36a8acff9593728ced19b77361682d4ba50998b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722217856556315714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b10b5fb6
f97bbbeb0e8fcd5dcdccfda,},Annotations:map[string]string{io.kubernetes.container.hash: ad1cf483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b715941c3f712059e0413e106b549f147c6de6a582c2f6ea88bf855ed27d1835,PodSandboxId:ba692dedf638acd1cddb61543d2f7b35a2adc8d7ed92bab4cbbaa7fad1f67243,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722217856545278463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d74999123251df867c
a4c3a8975d05,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308a9e54638572ef0220ebd1bfc53e933f63ad1d2c852eb89a00554998784b2e,PodSandboxId:ebbc662d4888f3e03d6de8bd39fb7d22105a597d8533e87fbd0e31caa6acd678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722217856561584272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0342
f2595f9419edfb2fca0525517de8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732cd97570a3b0a740efb2032f453cee60ea34f64b9e7251dbc24d625522e646,PodSandboxId:a5b7160fd3a284271ed7817ce2aaf9dd5726ed66fb3e1c55d5bd8c6da439781d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722217856483473809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5587748ce7ab418eda606e5da42c90a5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 443a1b6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d477d398-b16b-46ba-9153-c41829a3c12d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.003321328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29499f72-bc26-472a-8e2a-eee2a16ffcbe name=/runtime.v1.RuntimeService/Version
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.003410862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29499f72-bc26-472a-8e2a-eee2a16ffcbe name=/runtime.v1.RuntimeService/Version
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.005488029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9e8dd20-d11f-453c-a9b4-a383b5c01d1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.005976588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217878005954210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9e8dd20-d11f-453c-a9b4-a383b5c01d1b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.006381991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd6de556-bce5-4fce-bc3f-535de976ec35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.006431052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd6de556-bce5-4fce-bc3f-535de976ec35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.006614626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15272be60aa0111b46c18c0aaeade293e73d2fb0fcec24070a91af52fcf06dd1,PodSandboxId:87260b9ccc7d01fd179557eaed08e15af236d5059be0fe919d9b2f8354d067b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722217868931738474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-65jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abf5dab-d5f4-415b-8bcf-a373b0480d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdb6a4de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e200c2e0d4ee10f9bfb583b8c404bad9320e1f4a269238b8677bbfe6fded925,PodSandboxId:b91299a5281b6b9c5a36e386f18338a9fd18e157205ab0cfa3a0cdf1ae9d36ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217861953632978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ba73ad5-07d3-4694-9b15-7ac3e0465d02,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1b3341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062e24783c79eac80f7e21a4fa78f4e0555f94d37438cefc3a9fa81527437f41,PodSandboxId:6b51e08c747f0735fce7e0d17ef8cf25ade66bcfb7a7d35d53703b737411f327,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722217861962052562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4zc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
af4d28-2b9e-47f4-98aa-3ad35e10a604,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd74fc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fcb438913d6feec5ddd4bcf6c3e937c2776087fb92b1bc35ab22a1c9f69a0e,PodSandboxId:df68c832f647756f1524952e36a8acff9593728ced19b77361682d4ba50998b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722217856556315714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b10b5fb6
f97bbbeb0e8fcd5dcdccfda,},Annotations:map[string]string{io.kubernetes.container.hash: ad1cf483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b715941c3f712059e0413e106b549f147c6de6a582c2f6ea88bf855ed27d1835,PodSandboxId:ba692dedf638acd1cddb61543d2f7b35a2adc8d7ed92bab4cbbaa7fad1f67243,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722217856545278463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d74999123251df867c
a4c3a8975d05,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308a9e54638572ef0220ebd1bfc53e933f63ad1d2c852eb89a00554998784b2e,PodSandboxId:ebbc662d4888f3e03d6de8bd39fb7d22105a597d8533e87fbd0e31caa6acd678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722217856561584272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0342
f2595f9419edfb2fca0525517de8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732cd97570a3b0a740efb2032f453cee60ea34f64b9e7251dbc24d625522e646,PodSandboxId:a5b7160fd3a284271ed7817ce2aaf9dd5726ed66fb3e1c55d5bd8c6da439781d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722217856483473809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5587748ce7ab418eda606e5da42c90a5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 443a1b6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd6de556-bce5-4fce-bc3f-535de976ec35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.039495391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e2f08e7-5e46-480b-b649-7a8a9c8389e5 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.039584047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e2f08e7-5e46-480b-b649-7a8a9c8389e5 name=/runtime.v1.RuntimeService/Version
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.041073752Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72f719b5-195b-46e5-a00b-ff3a8a419c53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.041517680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722217878041496296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72f719b5-195b-46e5-a00b-ff3a8a419c53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.042097509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31921336-0022-43a8-9ad8-1c9b37fd9be4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.042146364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31921336-0022-43a8-9ad8-1c9b37fd9be4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.042339890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15272be60aa0111b46c18c0aaeade293e73d2fb0fcec24070a91af52fcf06dd1,PodSandboxId:87260b9ccc7d01fd179557eaed08e15af236d5059be0fe919d9b2f8354d067b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722217868931738474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-65jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abf5dab-d5f4-415b-8bcf-a373b0480d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdb6a4de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e200c2e0d4ee10f9bfb583b8c404bad9320e1f4a269238b8677bbfe6fded925,PodSandboxId:b91299a5281b6b9c5a36e386f18338a9fd18e157205ab0cfa3a0cdf1ae9d36ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217861953632978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ba73ad5-07d3-4694-9b15-7ac3e0465d02,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1b3341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062e24783c79eac80f7e21a4fa78f4e0555f94d37438cefc3a9fa81527437f41,PodSandboxId:6b51e08c747f0735fce7e0d17ef8cf25ade66bcfb7a7d35d53703b737411f327,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722217861962052562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4zc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
af4d28-2b9e-47f4-98aa-3ad35e10a604,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd74fc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fcb438913d6feec5ddd4bcf6c3e937c2776087fb92b1bc35ab22a1c9f69a0e,PodSandboxId:df68c832f647756f1524952e36a8acff9593728ced19b77361682d4ba50998b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722217856556315714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b10b5fb6
f97bbbeb0e8fcd5dcdccfda,},Annotations:map[string]string{io.kubernetes.container.hash: ad1cf483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b715941c3f712059e0413e106b549f147c6de6a582c2f6ea88bf855ed27d1835,PodSandboxId:ba692dedf638acd1cddb61543d2f7b35a2adc8d7ed92bab4cbbaa7fad1f67243,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722217856545278463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d74999123251df867c
a4c3a8975d05,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308a9e54638572ef0220ebd1bfc53e933f63ad1d2c852eb89a00554998784b2e,PodSandboxId:ebbc662d4888f3e03d6de8bd39fb7d22105a597d8533e87fbd0e31caa6acd678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722217856561584272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0342
f2595f9419edfb2fca0525517de8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732cd97570a3b0a740efb2032f453cee60ea34f64b9e7251dbc24d625522e646,PodSandboxId:a5b7160fd3a284271ed7817ce2aaf9dd5726ed66fb3e1c55d5bd8c6da439781d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722217856483473809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5587748ce7ab418eda606e5da42c90a5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 443a1b6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31921336-0022-43a8-9ad8-1c9b37fd9be4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.054187158Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dcf95c9b-c88c-4221-aef5-b7b0c9ea8eaa name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.054373693Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:87260b9ccc7d01fd179557eaed08e15af236d5059be0fe919d9b2f8354d067b3,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-65jsw,Uid:3abf5dab-d5f4-415b-8bcf-a373b0480d34,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217868706619153,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-65jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abf5dab-d5f4-415b-8bcf-a373b0480d34,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:51:00.797020266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b91299a5281b6b9c5a36e386f18338a9fd18e157205ab0cfa3a0cdf1ae9d36ba,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5ba73ad5-07d3-4694-9b15-7ac3e0465d02,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217861717513934,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ba73ad5-07d3-4694-9b15-7ac3e0465d02,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T01:51:00.796992593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b51e08c747f0735fce7e0d17ef8cf25ade66bcfb7a7d35d53703b737411f327,Metadata:&PodSandboxMetadata{Name:kube-proxy-z4zc7,Uid:ddaf4d28-2b9e-47f4-98aa-3ad35e10a604,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217861716553063,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z4zc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddaf4d28-2b9e-47f4-98aa-3ad35e10a604,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T01:51:00.797025055Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ebbc662d4888f3e03d6de8bd39fb7d22105a597d8533e87fbd0e31caa6acd678,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-609534,Ui
d:0342f2595f9419edfb2fca0525517de8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217856339467512,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0342f2595f9419edfb2fca0525517de8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0342f2595f9419edfb2fca0525517de8,kubernetes.io/config.seen: 2024-07-29T01:50:55.821601382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5b7160fd3a284271ed7817ce2aaf9dd5726ed66fb3e1c55d5bd8c6da439781d,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-609534,Uid:5587748ce7ab418eda606e5da42c90a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217856335751256,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-609534,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 5587748ce7ab418eda606e5da42c90a5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.21:2379,kubernetes.io/config.hash: 5587748ce7ab418eda606e5da42c90a5,kubernetes.io/config.seen: 2024-07-29T01:50:55.864003496Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba692dedf638acd1cddb61543d2f7b35a2adc8d7ed92bab4cbbaa7fad1f67243,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-609534,Uid:63d74999123251df867ca4c3a8975d05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217856328466596,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d74999123251df867ca4c3a8975d05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 63d74999123251df867ca4c3a8975d05,kubernetes.io/config.seen: 2024-07-29T01:
50:55.821602602Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:df68c832f647756f1524952e36a8acff9593728ced19b77361682d4ba50998b9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-609534,Uid:0b10b5fb6f97bbbeb0e8fcd5dcdccfda,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722217856315701921,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b10b5fb6f97bbbeb0e8fcd5dcdccfda,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.21:8443,kubernetes.io/config.hash: 0b10b5fb6f97bbbeb0e8fcd5dcdccfda,kubernetes.io/config.seen: 2024-07-29T01:50:55.821575973Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dcf95c9b-c88c-4221-aef5-b7b0c9ea8eaa name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.055134712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c0adbcb-c16f-4dd6-accd-94b34256429c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.055248928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c0adbcb-c16f-4dd6-accd-94b34256429c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 01:51:18 test-preload-609534 crio[686]: time="2024-07-29 01:51:18.055399974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15272be60aa0111b46c18c0aaeade293e73d2fb0fcec24070a91af52fcf06dd1,PodSandboxId:87260b9ccc7d01fd179557eaed08e15af236d5059be0fe919d9b2f8354d067b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722217868931738474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-65jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3abf5dab-d5f4-415b-8bcf-a373b0480d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdb6a4de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e200c2e0d4ee10f9bfb583b8c404bad9320e1f4a269238b8677bbfe6fded925,PodSandboxId:b91299a5281b6b9c5a36e386f18338a9fd18e157205ab0cfa3a0cdf1ae9d36ba,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722217861953632978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 5ba73ad5-07d3-4694-9b15-7ac3e0465d02,},Annotations:map[string]string{io.kubernetes.container.hash: 2a1b3341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:062e24783c79eac80f7e21a4fa78f4e0555f94d37438cefc3a9fa81527437f41,PodSandboxId:6b51e08c747f0735fce7e0d17ef8cf25ade66bcfb7a7d35d53703b737411f327,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722217861962052562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z4zc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
af4d28-2b9e-47f4-98aa-3ad35e10a604,},Annotations:map[string]string{io.kubernetes.container.hash: 1dd74fc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fcb438913d6feec5ddd4bcf6c3e937c2776087fb92b1bc35ab22a1c9f69a0e,PodSandboxId:df68c832f647756f1524952e36a8acff9593728ced19b77361682d4ba50998b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722217856556315714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b10b5fb6
f97bbbeb0e8fcd5dcdccfda,},Annotations:map[string]string{io.kubernetes.container.hash: ad1cf483,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b715941c3f712059e0413e106b549f147c6de6a582c2f6ea88bf855ed27d1835,PodSandboxId:ba692dedf638acd1cddb61543d2f7b35a2adc8d7ed92bab4cbbaa7fad1f67243,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722217856545278463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63d74999123251df867c
a4c3a8975d05,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:308a9e54638572ef0220ebd1bfc53e933f63ad1d2c852eb89a00554998784b2e,PodSandboxId:ebbc662d4888f3e03d6de8bd39fb7d22105a597d8533e87fbd0e31caa6acd678,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722217856561584272,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0342
f2595f9419edfb2fca0525517de8,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:732cd97570a3b0a740efb2032f453cee60ea34f64b9e7251dbc24d625522e646,PodSandboxId:a5b7160fd3a284271ed7817ce2aaf9dd5726ed66fb3e1c55d5bd8c6da439781d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722217856483473809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-609534,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5587748ce7ab418eda606e5da42c90a5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 443a1b6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c0adbcb-c16f-4dd6-accd-94b34256429c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15272be60aa01       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   87260b9ccc7d0       coredns-6d4b75cb6d-65jsw
	062e24783c79e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   6b51e08c747f0       kube-proxy-z4zc7
	6e200c2e0d4ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   b91299a5281b6       storage-provisioner
	308a9e5463857       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   ebbc662d4888f       kube-controller-manager-test-preload-609534
	e6fcb438913d6       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   df68c832f6477       kube-apiserver-test-preload-609534
	b715941c3f712       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   ba692dedf638a       kube-scheduler-test-preload-609534
	732cd97570a3b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   a5b7160fd3a28       etcd-test-preload-609534
	
	
	==> coredns [15272be60aa0111b46c18c0aaeade293e73d2fb0fcec24070a91af52fcf06dd1] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:51937 - 37737 "HINFO IN 8818203314730922751.8387541515440915029. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014367443s
	
	
	==> describe nodes <==
	Name:               test-preload-609534
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-609534
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=test-preload-609534
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_49_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:49:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-609534
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 01:51:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:51:11 +0000   Mon, 29 Jul 2024 01:49:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:51:11 +0000   Mon, 29 Jul 2024 01:49:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:51:11 +0000   Mon, 29 Jul 2024 01:49:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:51:11 +0000   Mon, 29 Jul 2024 01:51:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.21
	  Hostname:    test-preload-609534
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f39f485b770d453997bef15c14c6bc4a
	  System UUID:                f39f485b-770d-4539-97be-f15c14c6bc4a
	  Boot ID:                    a3e0a7ee-ba92-4b24-b836-2f0cc1113d5f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-65jsw                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     97s
	  kube-system                 etcd-test-preload-609534                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         108s
	  kube-system                 kube-apiserver-test-preload-609534             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-test-preload-609534    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-z4zc7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-609534             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (8%)   170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  117s (x5 over 117s)  kubelet          Node test-preload-609534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x5 over 117s)  kubelet          Node test-preload-609534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x5 over 117s)  kubelet          Node test-preload-609534 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node test-preload-609534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node test-preload-609534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node test-preload-609534 status is now: NodeHasSufficientPID
	  Normal  NodeReady                99s                  kubelet          Node test-preload-609534 status is now: NodeReady
	  Normal  RegisteredNode           97s                  node-controller  Node test-preload-609534 event: Registered Node test-preload-609534 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x8 over 23s)    kubelet          Node test-preload-609534 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 23s)    kubelet          Node test-preload-609534 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 23s)    kubelet          Node test-preload-609534 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                   node-controller  Node test-preload-609534 event: Registered Node test-preload-609534 in Controller
	
	
	==> dmesg <==
	[Jul29 01:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050691] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040060] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.790024] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.524890] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.593793] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.637978] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.063140] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057334] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.187291] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.114982] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.273999] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[ +12.637930] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.068228] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.994119] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[Jul29 01:51] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.351018] systemd-fstab-generator[1784]: Ignoring "noauto" option for root device
	[  +5.471081] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [732cd97570a3b0a740efb2032f453cee60ea34f64b9e7251dbc24d625522e646] <==
	{"level":"info","ts":"2024-07-29T01:50:56.802Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3c2bdad7569acae7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T01:50:56.813Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T01:50:56.816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 switched to configuration voters=(4335799684680043239)"}
	{"level":"info","ts":"2024-07-29T01:50:56.817Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","added-peer-id":"3c2bdad7569acae7","added-peer-peer-urls":["https://192.168.39.21:2380"]}
	{"level":"info","ts":"2024-07-29T01:50:56.818Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f019a0e2d3e7d785","local-member-id":"3c2bdad7569acae7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:50:56.819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:50:56.835Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T01:50:56.838Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2024-07-29T01:50:56.838Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.21:2380"}
	{"level":"info","ts":"2024-07-29T01:50:56.838Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3c2bdad7569acae7","initial-advertise-peer-urls":["https://192.168.39.21:2380"],"listen-peer-urls":["https://192.168.39.21:2380"],"advertise-client-urls":["https://192.168.39.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:50:56.838Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T01:50:58.236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgPreVoteResp from 3c2bdad7569acae7 at term 2"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 received MsgVoteResp from 3c2bdad7569acae7 at term 3"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3c2bdad7569acae7 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3c2bdad7569acae7 elected leader 3c2bdad7569acae7 at term 3"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3c2bdad7569acae7","local-member-attributes":"{Name:test-preload-609534 ClientURLs:[https://192.168.39.21:2379]}","request-path":"/0/members/3c2bdad7569acae7/attributes","cluster-id":"f019a0e2d3e7d785","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:50:58.237Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:50:58.239Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T01:50:58.240Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T01:50:58.241Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.21:2379"}
	{"level":"info","ts":"2024-07-29T01:50:58.247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T01:50:58.247Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:51:18 up 0 min,  0 users,  load average: 0.47, 0.15, 0.05
	Linux test-preload-609534 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e6fcb438913d6feec5ddd4bcf6c3e937c2776087fb92b1bc35ab22a1c9f69a0e] <==
	I0729 01:51:00.623453       1 controller.go:85] Starting OpenAPI controller
	I0729 01:51:00.623466       1 controller.go:85] Starting OpenAPI V3 controller
	I0729 01:51:00.623493       1 naming_controller.go:291] Starting NamingConditionController
	I0729 01:51:00.623514       1 establishing_controller.go:76] Starting EstablishingController
	I0729 01:51:00.623691       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0729 01:51:00.623704       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 01:51:00.623720       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 01:51:00.738360       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:51:00.747661       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 01:51:00.773245       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 01:51:00.775187       1 cache.go:39] Caches are synced for autoregister controller
	I0729 01:51:00.792135       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:51:00.800510       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 01:51:00.813755       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 01:51:00.816301       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 01:51:01.305510       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 01:51:01.620134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:51:02.130176       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 01:51:02.144289       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 01:51:02.180437       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 01:51:02.208013       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:51:02.214770       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:51:02.384247       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0729 01:51:13.228144       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 01:51:13.328750       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [308a9e54638572ef0220ebd1bfc53e933f63ad1d2c852eb89a00554998784b2e] <==
	I0729 01:51:13.221652       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 01:51:13.231959       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 01:51:13.255494       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 01:51:13.270268       1 shared_informer.go:262] Caches are synced for disruption
	I0729 01:51:13.270366       1 disruption.go:371] Sending events to api server.
	W0729 01:51:13.281155       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-609534" does not exist
	I0729 01:51:13.283272       1 shared_informer.go:262] Caches are synced for GC
	I0729 01:51:13.291460       1 shared_informer.go:262] Caches are synced for node
	I0729 01:51:13.291487       1 range_allocator.go:173] Starting range CIDR allocator
	I0729 01:51:13.291492       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0729 01:51:13.291500       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0729 01:51:13.310648       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 01:51:13.326953       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 01:51:13.329068       1 shared_informer.go:262] Caches are synced for TTL
	I0729 01:51:13.330390       1 shared_informer.go:262] Caches are synced for taint
	I0729 01:51:13.330504       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 01:51:13.330680       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-609534. Assuming now as a timestamp.
	I0729 01:51:13.330730       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 01:51:13.330678       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 01:51:13.331029       1 event.go:294] "Event occurred" object="test-preload-609534" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-609534 event: Registered Node test-preload-609534 in Controller"
	I0729 01:51:13.335493       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 01:51:13.365169       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 01:51:13.776287       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 01:51:13.811202       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 01:51:13.811251       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [062e24783c79eac80f7e21a4fa78f4e0555f94d37438cefc3a9fa81527437f41] <==
	I0729 01:51:02.330180       1 node.go:163] Successfully retrieved node IP: 192.168.39.21
	I0729 01:51:02.330323       1 server_others.go:138] "Detected node IP" address="192.168.39.21"
	I0729 01:51:02.330363       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 01:51:02.372356       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 01:51:02.372373       1 server_others.go:206] "Using iptables Proxier"
	I0729 01:51:02.373198       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 01:51:02.374201       1 server.go:661] "Version info" version="v1.24.4"
	I0729 01:51:02.374211       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:51:02.376149       1 config.go:317] "Starting service config controller"
	I0729 01:51:02.376194       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 01:51:02.376233       1 config.go:226] "Starting endpoint slice config controller"
	I0729 01:51:02.376254       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 01:51:02.377910       1 config.go:444] "Starting node config controller"
	I0729 01:51:02.377951       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 01:51:02.476589       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 01:51:02.476790       1 shared_informer.go:262] Caches are synced for service config
	I0729 01:51:02.478080       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [b715941c3f712059e0413e106b549f147c6de6a582c2f6ea88bf855ed27d1835] <==
	I0729 01:50:57.575065       1 serving.go:348] Generated self-signed cert in-memory
	W0729 01:51:00.694317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 01:51:00.694970       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:51:00.695134       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 01:51:00.695225       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 01:51:00.747110       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0729 01:51:00.748283       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:51:00.756666       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0729 01:51:00.756977       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 01:51:00.759394       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 01:51:00.757002       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:51:00.860426       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.791221    1154 apiserver.go:52] "Watching apiserver"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.797161    1154 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.797305    1154 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.797384    1154 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: E0729 01:51:00.799303    1154 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-65jsw" podUID=3abf5dab-d5f4-415b-8bcf-a373b0480d34
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.865417    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnl24\" (UniqueName: \"kubernetes.io/projected/5ba73ad5-07d3-4694-9b15-7ac3e0465d02-kube-api-access-rnl24\") pod \"storage-provisioner\" (UID: \"5ba73ad5-07d3-4694-9b15-7ac3e0465d02\") " pod="kube-system/storage-provisioner"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.865772    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddaf4d28-2b9e-47f4-98aa-3ad35e10a604-kube-proxy\") pod \"kube-proxy-z4zc7\" (UID: \"ddaf4d28-2b9e-47f4-98aa-3ad35e10a604\") " pod="kube-system/kube-proxy-z4zc7"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.865971    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22jc\" (UniqueName: \"kubernetes.io/projected/ddaf4d28-2b9e-47f4-98aa-3ad35e10a604-kube-api-access-m22jc\") pod \"kube-proxy-z4zc7\" (UID: \"ddaf4d28-2b9e-47f4-98aa-3ad35e10a604\") " pod="kube-system/kube-proxy-z4zc7"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.866019    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5ba73ad5-07d3-4694-9b15-7ac3e0465d02-tmp\") pod \"storage-provisioner\" (UID: \"5ba73ad5-07d3-4694-9b15-7ac3e0465d02\") " pod="kube-system/storage-provisioner"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.866051    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume\") pod \"coredns-6d4b75cb6d-65jsw\" (UID: \"3abf5dab-d5f4-415b-8bcf-a373b0480d34\") " pod="kube-system/coredns-6d4b75cb6d-65jsw"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.866070    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725rr\" (UniqueName: \"kubernetes.io/projected/3abf5dab-d5f4-415b-8bcf-a373b0480d34-kube-api-access-725rr\") pod \"coredns-6d4b75cb6d-65jsw\" (UID: \"3abf5dab-d5f4-415b-8bcf-a373b0480d34\") " pod="kube-system/coredns-6d4b75cb6d-65jsw"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.866269    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddaf4d28-2b9e-47f4-98aa-3ad35e10a604-xtables-lock\") pod \"kube-proxy-z4zc7\" (UID: \"ddaf4d28-2b9e-47f4-98aa-3ad35e10a604\") " pod="kube-system/kube-proxy-z4zc7"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.866294    1154 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddaf4d28-2b9e-47f4-98aa-3ad35e10a604-lib-modules\") pod \"kube-proxy-z4zc7\" (UID: \"ddaf4d28-2b9e-47f4-98aa-3ad35e10a604\") " pod="kube-system/kube-proxy-z4zc7"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: I0729 01:51:00.866313    1154 reconciler.go:159] "Reconciler: start to sync state"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: E0729 01:51:00.884748    1154 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: E0729 01:51:00.970688    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 01:51:00 test-preload-609534 kubelet[1154]: E0729 01:51:00.970789    1154 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume podName:3abf5dab-d5f4-415b-8bcf-a373b0480d34 nodeName:}" failed. No retries permitted until 2024-07-29 01:51:01.470758521 +0000 UTC m=+5.799340294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume") pod "coredns-6d4b75cb6d-65jsw" (UID: "3abf5dab-d5f4-415b-8bcf-a373b0480d34") : object "kube-system"/"coredns" not registered
	Jul 29 01:51:01 test-preload-609534 kubelet[1154]: E0729 01:51:01.475055    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 01:51:01 test-preload-609534 kubelet[1154]: E0729 01:51:01.475153    1154 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume podName:3abf5dab-d5f4-415b-8bcf-a373b0480d34 nodeName:}" failed. No retries permitted until 2024-07-29 01:51:02.475130575 +0000 UTC m=+6.803712351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume") pod "coredns-6d4b75cb6d-65jsw" (UID: "3abf5dab-d5f4-415b-8bcf-a373b0480d34") : object "kube-system"/"coredns" not registered
	Jul 29 01:51:02 test-preload-609534 kubelet[1154]: E0729 01:51:02.483281    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 01:51:02 test-preload-609534 kubelet[1154]: E0729 01:51:02.483369    1154 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume podName:3abf5dab-d5f4-415b-8bcf-a373b0480d34 nodeName:}" failed. No retries permitted until 2024-07-29 01:51:04.483334316 +0000 UTC m=+8.811916078 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume") pod "coredns-6d4b75cb6d-65jsw" (UID: "3abf5dab-d5f4-415b-8bcf-a373b0480d34") : object "kube-system"/"coredns" not registered
	Jul 29 01:51:02 test-preload-609534 kubelet[1154]: E0729 01:51:02.899647    1154 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-65jsw" podUID=3abf5dab-d5f4-415b-8bcf-a373b0480d34
	Jul 29 01:51:04 test-preload-609534 kubelet[1154]: E0729 01:51:04.498962    1154 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 01:51:04 test-preload-609534 kubelet[1154]: E0729 01:51:04.499120    1154 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume podName:3abf5dab-d5f4-415b-8bcf-a373b0480d34 nodeName:}" failed. No retries permitted until 2024-07-29 01:51:08.499099354 +0000 UTC m=+12.827681128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3abf5dab-d5f4-415b-8bcf-a373b0480d34-config-volume") pod "coredns-6d4b75cb6d-65jsw" (UID: "3abf5dab-d5f4-415b-8bcf-a373b0480d34") : object "kube-system"/"coredns" not registered
	Jul 29 01:51:04 test-preload-609534 kubelet[1154]: E0729 01:51:04.902589    1154 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-65jsw" podUID=3abf5dab-d5f4-415b-8bcf-a373b0480d34
	
	
	==> storage-provisioner [6e200c2e0d4ee10f9bfb583b8c404bad9320e1f4a269238b8677bbfe6fded925] <==
	I0729 01:51:02.163772       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-609534 -n test-preload-609534
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-609534 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-609534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-609534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-609534: (1.138185317s)
--- FAIL: TestPreload (265.71s)
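
Note: when the post-mortem above was captured, the kubelet was still reporting "No CNI configuration file in /etc/cni/net.d/" and the coredns ConfigMap as "not registered". A minimal sketch of follow-up checks one might run against such a profile, assuming test-preload-609534 were still running (the profile was deleted during cleanup above, so these commands are illustrative only and not part of the recorded test output):

	# confirm whether any CNI config was written inside the node (profile name taken from the logs above)
	out/minikube-linux-amd64 -p test-preload-609534 ssh "ls -l /etc/cni/net.d/"
	# check the coredns ConfigMap the kubelet reported as "not registered"
	kubectl --context test-preload-609534 -n kube-system get configmap coredns
	# list the kube-system pods the kubelet was still trying to sync
	kubectl --context test-preload-609534 -n kube-system get pods -o wide
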

                                                
                                    
x
+
TestKubernetesUpgrade (1197.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0729 01:57:23.071280   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m3.48650864s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-211243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-211243" primary control-plane node in "kubernetes-upgrade-211243" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:57:15.178460   57807 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:57:15.178636   57807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:57:15.178650   57807 out.go:304] Setting ErrFile to fd 2...
	I0729 01:57:15.178657   57807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:57:15.178934   57807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:57:15.179751   57807 out.go:298] Setting JSON to false
	I0729 01:57:15.181045   57807 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5981,"bootTime":1722212254,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:57:15.181135   57807 start.go:139] virtualization: kvm guest
	I0729 01:57:15.183516   57807 out.go:177] * [kubernetes-upgrade-211243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:57:15.185089   57807 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:57:15.185126   57807 notify.go:220] Checking for updates...
	I0729 01:57:15.187809   57807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:57:15.189206   57807 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:57:15.190437   57807 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:57:15.191563   57807 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:57:15.192744   57807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:57:15.194432   57807 config.go:182] Loaded profile config "NoKubernetes-703567": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0729 01:57:15.194580   57807 config.go:182] Loaded profile config "cert-expiration-923851": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:57:15.194732   57807 config.go:182] Loaded profile config "pause-112077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:57:15.194845   57807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:57:15.234331   57807 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 01:57:15.235669   57807 start.go:297] selected driver: kvm2
	I0729 01:57:15.235689   57807 start.go:901] validating driver "kvm2" against <nil>
	I0729 01:57:15.235706   57807 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:57:15.236835   57807 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:57:15.236947   57807 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:57:15.253097   57807 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:57:15.253150   57807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 01:57:15.253351   57807 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 01:57:15.253374   57807 cni.go:84] Creating CNI manager for ""
	I0729 01:57:15.253382   57807 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:57:15.253396   57807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 01:57:15.253444   57807 start.go:340] cluster config:
	{Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:57:15.253534   57807 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:57:15.255356   57807 out.go:177] * Starting "kubernetes-upgrade-211243" primary control-plane node in "kubernetes-upgrade-211243" cluster
	I0729 01:57:15.256908   57807 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 01:57:15.256955   57807 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 01:57:15.256963   57807 cache.go:56] Caching tarball of preloaded images
	I0729 01:57:15.257047   57807 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:57:15.257059   57807 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 01:57:15.257180   57807 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/config.json ...
	I0729 01:57:15.257203   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/config.json: {Name:mk1b095a84cba46324b30e0485c71aea854a82cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:57:15.257365   57807 start.go:360] acquireMachinesLock for kubernetes-upgrade-211243: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:57:47.572004   57807 start.go:364] duration metric: took 32.314600803s to acquireMachinesLock for "kubernetes-upgrade-211243"
	I0729 01:57:47.572093   57807 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:57:47.572229   57807 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 01:57:47.575479   57807 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 01:57:47.575646   57807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:57:47.575700   57807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:57:47.593002   57807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45413
	I0729 01:57:47.593510   57807 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:57:47.594137   57807 main.go:141] libmachine: Using API Version  1
	I0729 01:57:47.594174   57807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:57:47.594550   57807 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:57:47.594734   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 01:57:47.594926   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:57:47.595086   57807 start.go:159] libmachine.API.Create for "kubernetes-upgrade-211243" (driver="kvm2")
	I0729 01:57:47.595116   57807 client.go:168] LocalClient.Create starting
	I0729 01:57:47.595166   57807 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:57:47.595204   57807 main.go:141] libmachine: Decoding PEM data...
	I0729 01:57:47.595225   57807 main.go:141] libmachine: Parsing certificate...
	I0729 01:57:47.595301   57807 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:57:47.595331   57807 main.go:141] libmachine: Decoding PEM data...
	I0729 01:57:47.595350   57807 main.go:141] libmachine: Parsing certificate...
	I0729 01:57:47.595382   57807 main.go:141] libmachine: Running pre-create checks...
	I0729 01:57:47.595405   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .PreCreateCheck
	I0729 01:57:47.595784   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetConfigRaw
	I0729 01:57:47.596212   57807 main.go:141] libmachine: Creating machine...
	I0729 01:57:47.596230   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .Create
	I0729 01:57:47.596382   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Creating KVM machine...
	I0729 01:57:47.597731   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found existing default KVM network
	I0729 01:57:47.599028   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:47.598853   58224 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:61:72:15} reservation:<nil>}
	I0729 01:57:47.599738   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:47.599663   58224 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:75:07} reservation:<nil>}
	I0729 01:57:47.600565   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:47.600470   58224 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ca4e0}
	I0729 01:57:47.600591   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | created network xml: 
	I0729 01:57:47.600624   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | <network>
	I0729 01:57:47.600651   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |   <name>mk-kubernetes-upgrade-211243</name>
	I0729 01:57:47.600669   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |   <dns enable='no'/>
	I0729 01:57:47.600678   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |   
	I0729 01:57:47.600686   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0729 01:57:47.600694   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |     <dhcp>
	I0729 01:57:47.600705   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0729 01:57:47.600716   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |     </dhcp>
	I0729 01:57:47.600725   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |   </ip>
	I0729 01:57:47.600738   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG |   
	I0729 01:57:47.600751   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | </network>
	I0729 01:57:47.600761   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | 
	I0729 01:57:47.606027   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | trying to create private KVM network mk-kubernetes-upgrade-211243 192.168.61.0/24...
	I0729 01:57:47.675187   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | private KVM network mk-kubernetes-upgrade-211243 192.168.61.0/24 created
	I0729 01:57:47.675241   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:47.675158   58224 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:57:47.675256   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243 ...
	I0729 01:57:47.675272   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:57:47.675353   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:57:47.929027   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:47.928822   58224 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa...
	I0729 01:57:48.024498   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:48.024380   58224 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/kubernetes-upgrade-211243.rawdisk...
	I0729 01:57:48.024530   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Writing magic tar header
	I0729 01:57:48.024543   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Writing SSH key tar header
	I0729 01:57:48.024552   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:48.024510   58224 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243 ...
	I0729 01:57:48.024697   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243
	I0729 01:57:48.024719   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:57:48.024743   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243 (perms=drwx------)
	I0729 01:57:48.024758   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:57:48.024769   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:57:48.024776   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:57:48.024788   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:57:48.024798   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:57:48.024805   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:57:48.024833   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Creating domain...
	I0729 01:57:48.024849   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:57:48.024860   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:57:48.024868   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:57:48.024877   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Checking permissions on dir: /home
	I0729 01:57:48.024883   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Skipping /home - not owner
	I0729 01:57:48.027177   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) define libvirt domain using xml: 
	I0729 01:57:48.027207   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) <domain type='kvm'>
	I0729 01:57:48.027221   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <name>kubernetes-upgrade-211243</name>
	I0729 01:57:48.027243   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <memory unit='MiB'>2200</memory>
	I0729 01:57:48.027274   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <vcpu>2</vcpu>
	I0729 01:57:48.027297   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <features>
	I0729 01:57:48.027310   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <acpi/>
	I0729 01:57:48.027332   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <apic/>
	I0729 01:57:48.027350   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <pae/>
	I0729 01:57:48.027360   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     
	I0729 01:57:48.027373   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   </features>
	I0729 01:57:48.027388   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <cpu mode='host-passthrough'>
	I0729 01:57:48.027400   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   
	I0729 01:57:48.027410   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   </cpu>
	I0729 01:57:48.027419   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <os>
	I0729 01:57:48.027430   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <type>hvm</type>
	I0729 01:57:48.027442   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <boot dev='cdrom'/>
	I0729 01:57:48.027453   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <boot dev='hd'/>
	I0729 01:57:48.027471   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <bootmenu enable='no'/>
	I0729 01:57:48.027636   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   </os>
	I0729 01:57:48.027656   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   <devices>
	I0729 01:57:48.027664   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <disk type='file' device='cdrom'>
	I0729 01:57:48.027688   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/boot2docker.iso'/>
	I0729 01:57:48.027706   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <target dev='hdc' bus='scsi'/>
	I0729 01:57:48.027719   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <readonly/>
	I0729 01:57:48.027726   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </disk>
	I0729 01:57:48.027739   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <disk type='file' device='disk'>
	I0729 01:57:48.027751   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:57:48.027766   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/kubernetes-upgrade-211243.rawdisk'/>
	I0729 01:57:48.027778   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <target dev='hda' bus='virtio'/>
	I0729 01:57:48.027789   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </disk>
	I0729 01:57:48.027804   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <interface type='network'>
	I0729 01:57:48.027816   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <source network='mk-kubernetes-upgrade-211243'/>
	I0729 01:57:48.027823   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <model type='virtio'/>
	I0729 01:57:48.027835   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </interface>
	I0729 01:57:48.027845   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <interface type='network'>
	I0729 01:57:48.027857   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <source network='default'/>
	I0729 01:57:48.027865   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <model type='virtio'/>
	I0729 01:57:48.027877   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </interface>
	I0729 01:57:48.027891   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <serial type='pty'>
	I0729 01:57:48.027922   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <target port='0'/>
	I0729 01:57:48.027949   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </serial>
	I0729 01:57:48.027965   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <console type='pty'>
	I0729 01:57:48.027989   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <target type='serial' port='0'/>
	I0729 01:57:48.028003   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </console>
	I0729 01:57:48.028022   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     <rng model='virtio'>
	I0729 01:57:48.028033   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)       <backend model='random'>/dev/random</backend>
	I0729 01:57:48.028051   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     </rng>
	I0729 01:57:48.028065   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     
	I0729 01:57:48.028077   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)     
	I0729 01:57:48.028105   57807 main.go:141] libmachine: (kubernetes-upgrade-211243)   </devices>
	I0729 01:57:48.028125   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) </domain>
	I0729 01:57:48.028140   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) 
	I0729 01:57:48.032204   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:13:67:17 in network default
	I0729 01:57:48.032733   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Ensuring networks are active...
	I0729 01:57:48.032752   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:48.033402   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Ensuring network default is active
	I0729 01:57:48.033717   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Ensuring network mk-kubernetes-upgrade-211243 is active
	I0729 01:57:48.034271   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Getting domain xml...
	I0729 01:57:48.035120   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Creating domain...
	I0729 01:57:49.352485   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Waiting to get IP...
	I0729 01:57:49.353640   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:49.354333   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:49.354404   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:49.354330   58224 retry.go:31] will retry after 195.145641ms: waiting for machine to come up
	I0729 01:57:49.550742   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:49.551373   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:49.551408   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:49.551318   58224 retry.go:31] will retry after 284.107779ms: waiting for machine to come up
	I0729 01:57:49.836971   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:49.837404   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:49.837434   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:49.837369   58224 retry.go:31] will retry after 448.272306ms: waiting for machine to come up
	I0729 01:57:50.286991   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:50.288380   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:50.288414   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:50.288332   58224 retry.go:31] will retry after 401.993253ms: waiting for machine to come up
	I0729 01:57:50.691998   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:50.692458   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:50.692507   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:50.692434   58224 retry.go:31] will retry after 674.432988ms: waiting for machine to come up
	I0729 01:57:51.368318   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:51.368814   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:51.368842   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:51.368765   58224 retry.go:31] will retry after 742.194733ms: waiting for machine to come up
	I0729 01:57:52.112876   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:52.113498   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:52.113527   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:52.113450   58224 retry.go:31] will retry after 738.719026ms: waiting for machine to come up
	I0729 01:57:52.854014   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:52.854538   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:52.854566   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:52.854485   58224 retry.go:31] will retry after 981.165293ms: waiting for machine to come up
	I0729 01:57:53.837177   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:53.837671   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:53.837704   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:53.837632   58224 retry.go:31] will retry after 1.644752346s: waiting for machine to come up
	I0729 01:57:55.484137   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:55.484560   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:55.484590   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:55.484501   58224 retry.go:31] will retry after 2.102264466s: waiting for machine to come up
	I0729 01:57:57.588045   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:57:57.588562   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:57:57.588592   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:57:57.588501   58224 retry.go:31] will retry after 2.5080605s: waiting for machine to come up
	I0729 01:58:00.099794   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:00.100299   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:58:00.100326   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:58:00.100251   58224 retry.go:31] will retry after 2.230123082s: waiting for machine to come up
	I0729 01:58:02.333559   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:02.334007   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:58:02.334092   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:58:02.333956   58224 retry.go:31] will retry after 3.259349329s: waiting for machine to come up
	I0729 01:58:05.596667   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:05.597152   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find current IP address of domain kubernetes-upgrade-211243 in network mk-kubernetes-upgrade-211243
	I0729 01:58:05.597173   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | I0729 01:58:05.597113   58224 retry.go:31] will retry after 4.336200822s: waiting for machine to come up
	I0729 01:58:09.936047   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:09.936568   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Found IP for machine: 192.168.61.63
	I0729 01:58:09.936601   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has current primary IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:09.936611   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Reserving static IP address...
	I0729 01:58:09.936961   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-211243", mac: "52:54:00:ce:2d:1a", ip: "192.168.61.63"} in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.011531   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Getting to WaitForSSH function...
	I0729 01:58:10.011564   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Reserved static IP address: 192.168.61.63
	I0729 01:58:10.011614   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Waiting for SSH to be available...
	I0729 01:58:10.014234   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.014666   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.014698   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.014821   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Using SSH client type: external
	I0729 01:58:10.014848   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa (-rw-------)
	I0729 01:58:10.014937   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 01:58:10.014967   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | About to run SSH command:
	I0729 01:58:10.015000   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | exit 0
	I0729 01:58:10.139129   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | SSH cmd err, output: <nil>: 
	I0729 01:58:10.139440   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) KVM machine creation complete!
	I0729 01:58:10.139812   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetConfigRaw
	I0729 01:58:10.140374   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:10.140591   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:10.140748   57807 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 01:58:10.140760   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetState
	I0729 01:58:10.142272   57807 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 01:58:10.142287   57807 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 01:58:10.142292   57807 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 01:58:10.142298   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.145276   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.145739   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.145771   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.145922   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:10.146136   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.146305   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.146410   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:10.146625   57807 main.go:141] libmachine: Using SSH client type: native
	I0729 01:58:10.146887   57807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 01:58:10.146901   57807 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 01:58:10.250568   57807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:58:10.250597   57807 main.go:141] libmachine: Detecting the provisioner...
	I0729 01:58:10.250608   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.253473   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.253836   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.253867   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.254062   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:10.254274   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.254472   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.254572   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:10.254712   57807 main.go:141] libmachine: Using SSH client type: native
	I0729 01:58:10.254928   57807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 01:58:10.254942   57807 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 01:58:10.364189   57807 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 01:58:10.364284   57807 main.go:141] libmachine: found compatible host: buildroot
	I0729 01:58:10.364293   57807 main.go:141] libmachine: Provisioning with buildroot...
	I0729 01:58:10.364301   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 01:58:10.364576   57807 buildroot.go:166] provisioning hostname "kubernetes-upgrade-211243"
	I0729 01:58:10.364602   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 01:58:10.364790   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.367681   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.368064   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.368089   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.368243   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:10.368415   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.368569   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.368716   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:10.368893   57807 main.go:141] libmachine: Using SSH client type: native
	I0729 01:58:10.369062   57807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 01:58:10.369076   57807 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-211243 && echo "kubernetes-upgrade-211243" | sudo tee /etc/hostname
	I0729 01:58:10.491399   57807 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-211243
	
	I0729 01:58:10.491434   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.494467   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.494795   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.494819   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.495131   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:10.495349   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.495528   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.495689   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:10.495890   57807 main.go:141] libmachine: Using SSH client type: native
	I0729 01:58:10.496106   57807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 01:58:10.496128   57807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-211243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-211243/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-211243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:58:10.608840   57807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:58:10.608874   57807 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:58:10.608927   57807 buildroot.go:174] setting up certificates
	I0729 01:58:10.608942   57807 provision.go:84] configureAuth start
	I0729 01:58:10.608960   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 01:58:10.609270   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 01:58:10.611902   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.612316   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.612345   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.612469   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.614686   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.615030   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.615074   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.615207   57807 provision.go:143] copyHostCerts
	I0729 01:58:10.615263   57807 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:58:10.615278   57807 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:58:10.615333   57807 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:58:10.615423   57807 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:58:10.615433   57807 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:58:10.615454   57807 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:58:10.615506   57807 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:58:10.615513   57807 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:58:10.615531   57807 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:58:10.615574   57807 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-211243 san=[127.0.0.1 192.168.61.63 kubernetes-upgrade-211243 localhost minikube]
	I0729 01:58:10.825410   57807 provision.go:177] copyRemoteCerts
	I0729 01:58:10.825459   57807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:58:10.825488   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.828648   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.829079   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:10.829107   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:10.829367   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:10.829577   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:10.829824   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:10.830000   57807 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 01:58:10.914274   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:58:10.942362   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 01:58:10.969548   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 01:58:10.996894   57807 provision.go:87] duration metric: took 387.93631ms to configureAuth
	I0729 01:58:10.996923   57807 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:58:10.997076   57807 config.go:182] Loaded profile config "kubernetes-upgrade-211243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 01:58:10.997156   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:10.999830   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.000141   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.000190   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.000303   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:11.000549   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.000707   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.000902   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:11.001089   57807 main.go:141] libmachine: Using SSH client type: native
	I0729 01:58:11.001263   57807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 01:58:11.001284   57807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:58:11.279691   57807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:58:11.279717   57807 main.go:141] libmachine: Checking connection to Docker...
	I0729 01:58:11.279726   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetURL
	I0729 01:58:11.281026   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | Using libvirt version 6000000
	I0729 01:58:11.283729   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.284105   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.284133   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.284280   57807 main.go:141] libmachine: Docker is up and running!
	I0729 01:58:11.284296   57807 main.go:141] libmachine: Reticulating splines...
	I0729 01:58:11.284303   57807 client.go:171] duration metric: took 23.689176154s to LocalClient.Create
	I0729 01:58:11.284326   57807 start.go:167] duration metric: took 23.689242458s to libmachine.API.Create "kubernetes-upgrade-211243"
	I0729 01:58:11.284336   57807 start.go:293] postStartSetup for "kubernetes-upgrade-211243" (driver="kvm2")
	I0729 01:58:11.284344   57807 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:58:11.284360   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:11.284606   57807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:58:11.284635   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:11.286916   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.287322   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.287348   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.287562   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:11.287767   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.287942   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:11.288097   57807 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 01:58:11.370886   57807 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:58:11.375467   57807 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:58:11.375497   57807 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:58:11.375557   57807 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:58:11.375628   57807 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:58:11.375720   57807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:58:11.386564   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:58:11.411642   57807 start.go:296] duration metric: took 127.293255ms for postStartSetup
	I0729 01:58:11.411699   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetConfigRaw
	I0729 01:58:11.412298   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 01:58:11.415019   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.415428   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.415459   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.415668   57807 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/config.json ...
	I0729 01:58:11.415911   57807 start.go:128] duration metric: took 23.843668563s to createHost
	I0729 01:58:11.415933   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:11.418030   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.418362   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.418389   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.418488   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:11.418684   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.418839   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.418949   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:11.419124   57807 main.go:141] libmachine: Using SSH client type: native
	I0729 01:58:11.419272   57807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 01:58:11.419282   57807 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 01:58:11.520285   57807 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722218291.493538453
	
	I0729 01:58:11.520307   57807 fix.go:216] guest clock: 1722218291.493538453
	I0729 01:58:11.520313   57807 fix.go:229] Guest: 2024-07-29 01:58:11.493538453 +0000 UTC Remote: 2024-07-29 01:58:11.415923709 +0000 UTC m=+56.285024446 (delta=77.614744ms)
	I0729 01:58:11.520333   57807 fix.go:200] guest clock delta is within tolerance: 77.614744ms
	I0729 01:58:11.520338   57807 start.go:83] releasing machines lock for "kubernetes-upgrade-211243", held for 23.948304025s
	I0729 01:58:11.520363   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:11.520623   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 01:58:11.523876   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.524261   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.524298   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.524558   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:11.525253   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:11.525476   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 01:58:11.525544   57807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:58:11.525599   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:11.525743   57807 ssh_runner.go:195] Run: cat /version.json
	I0729 01:58:11.525770   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 01:58:11.528536   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.528915   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.528962   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.529031   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.529092   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:11.529272   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.529347   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:11.529379   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:11.529421   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:11.529564   57807 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 01:58:11.529580   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 01:58:11.529802   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 01:58:11.529971   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 01:58:11.530111   57807 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 01:58:11.629499   57807 ssh_runner.go:195] Run: systemctl --version
	I0729 01:58:11.636867   57807 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:58:11.809150   57807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:58:11.816275   57807 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:58:11.816436   57807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:58:11.836529   57807 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 01:58:11.836559   57807 start.go:495] detecting cgroup driver to use...
	I0729 01:58:11.836635   57807 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:58:11.854194   57807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:58:11.870551   57807 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:58:11.870621   57807 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:58:11.886150   57807 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:58:11.902578   57807 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:58:12.018898   57807 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:58:12.193447   57807 docker.go:233] disabling docker service ...
	I0729 01:58:12.193521   57807 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:58:12.209274   57807 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:58:12.224748   57807 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:58:12.389213   57807 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:58:12.539571   57807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:58:12.557081   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:58:12.580381   57807 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 01:58:12.580457   57807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:58:12.594754   57807 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:58:12.594834   57807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:58:12.606869   57807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:58:12.618039   57807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:58:12.629030   57807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:58:12.640743   57807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:58:12.650931   57807 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 01:58:12.650997   57807 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 01:58:12.667124   57807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:58:12.680331   57807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:58:12.823342   57807 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:58:12.985163   57807 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:58:12.985243   57807 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:58:12.990287   57807 start.go:563] Will wait 60s for crictl version
	I0729 01:58:12.990366   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:12.994837   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:58:13.035803   57807 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:58:13.035892   57807 ssh_runner.go:195] Run: crio --version
	I0729 01:58:13.069202   57807 ssh_runner.go:195] Run: crio --version
	I0729 01:58:13.109655   57807 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 01:58:13.110826   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 01:58:13.113706   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:13.114069   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 02:58:02 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 01:58:13.114100   57807 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 01:58:13.114282   57807 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 01:58:13.118698   57807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:58:13.131965   57807 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:58:13.132054   57807 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 01:58:13.132092   57807 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:58:13.167752   57807 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 01:58:13.167828   57807 ssh_runner.go:195] Run: which lz4
	I0729 01:58:13.172616   57807 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 01:58:13.177701   57807 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 01:58:13.177731   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 01:58:14.978557   57807 crio.go:462] duration metric: took 1.805983253s to copy over tarball
	I0729 01:58:14.978670   57807 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 01:58:17.836075   57807 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.857370339s)
	I0729 01:58:17.836110   57807 crio.go:469] duration metric: took 2.857522026s to extract the tarball
	I0729 01:58:17.836119   57807 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 01:58:17.882432   57807 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:58:17.947185   57807 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 01:58:17.947258   57807 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 01:58:17.947354   57807 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:58:17.947351   57807 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:17.947402   57807 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:17.947438   57807 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:17.947469   57807 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:17.947636   57807 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 01:58:17.947641   57807 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 01:58:17.947736   57807 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:17.949296   57807 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:17.949329   57807 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:58:17.949446   57807 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 01:58:17.949747   57807 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:17.949765   57807 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:17.949835   57807 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 01:58:17.950059   57807 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:17.950324   57807 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:18.131384   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:18.132241   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:18.142328   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:18.148409   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 01:58:18.154352   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:18.195418   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:18.273209   57807 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 01:58:18.273232   57807 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 01:58:18.273262   57807 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:18.273262   57807 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:18.273306   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.273307   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.273355   57807 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 01:58:18.273480   57807 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:18.273526   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.307516   57807 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 01:58:18.307561   57807 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 01:58:18.307606   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.308857   57807 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 01:58:18.308899   57807 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:18.308942   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.319752   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:18.319783   57807 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 01:58:18.319811   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 01:58:18.319813   57807 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:18.319818   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:18.319770   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:18.319849   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.319868   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:18.370853   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 01:58:18.474081   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:18.474144   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:18.474212   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:18.474216   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 01:58:18.474283   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:18.474322   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:18.474448   57807 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 01:58:18.474479   57807 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 01:58:18.474508   57807 ssh_runner.go:195] Run: which crictl
	I0729 01:58:18.610983   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:18.611090   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 01:58:18.611118   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 01:58:18.611185   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 01:58:18.611229   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 01:58:18.611289   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 01:58:18.611293   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 01:58:18.712567   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 01:58:18.799670   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 01:58:18.799752   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 01:58:18.799752   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 01:58:18.799809   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 01:58:18.799874   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 01:58:18.799919   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 01:58:18.830015   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 01:58:18.853115   57807 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 01:58:18.889627   57807 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 01:58:18.904872   57807 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 01:58:19.066188   57807 cache_images.go:92] duration metric: took 1.118908916s to LoadCachedImages
	W0729 01:58:19.066381   57807 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0729 01:58:19.066558   57807 kubeadm.go:934] updating node { 192.168.61.63 8443 v1.20.0 crio true true} ...
	I0729 01:58:19.066711   57807 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-211243 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:58:19.066798   57807 ssh_runner.go:195] Run: crio config
	I0729 01:58:19.139350   57807 cni.go:84] Creating CNI manager for ""
	I0729 01:58:19.139379   57807 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:58:19.139388   57807 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:58:19.139405   57807 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-211243 NodeName:kubernetes-upgrade-211243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 01:58:19.139529   57807 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-211243"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 01:58:19.139583   57807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 01:58:19.154082   57807 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:58:19.154154   57807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:58:19.173192   57807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0729 01:58:19.194014   57807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:58:19.213808   57807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 01:58:19.235682   57807 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0729 01:58:19.241374   57807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 01:58:19.257264   57807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:58:19.394122   57807 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:58:19.413314   57807 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243 for IP: 192.168.61.63
	I0729 01:58:19.413343   57807 certs.go:194] generating shared ca certs ...
	I0729 01:58:19.413362   57807 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.413541   57807 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:58:19.413593   57807 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:58:19.413605   57807 certs.go:256] generating profile certs ...
	I0729 01:58:19.413680   57807 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.key
	I0729 01:58:19.413702   57807 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.crt with IP's: []
	I0729 01:58:19.520101   57807 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.crt ...
	I0729 01:58:19.520129   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.crt: {Name:mke98a77dd93b5b3435a3349ad715b720e5b1d94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.520325   57807 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.key ...
	I0729 01:58:19.520343   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.key: {Name:mkbf965dfdd06a11102ffa773dfa1b7ff8e70c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.520446   57807 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key.ac452c37
	I0729 01:58:19.520464   57807 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt.ac452c37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.63]
	I0729 01:58:19.609684   57807 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt.ac452c37 ...
	I0729 01:58:19.609711   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt.ac452c37: {Name:mk554828f8fe932183d81b3c33b9545f4ba3e8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.609918   57807 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key.ac452c37 ...
	I0729 01:58:19.609936   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key.ac452c37: {Name:mkd40740a42e987e142c7917629dde1ec8a4c3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.610019   57807 certs.go:381] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt.ac452c37 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt
	I0729 01:58:19.610086   57807 certs.go:385] copying /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key.ac452c37 -> /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key
	I0729 01:58:19.610137   57807 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.key
	I0729 01:58:19.610157   57807 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.crt with IP's: []
	I0729 01:58:19.841989   57807 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.crt ...
	I0729 01:58:19.842023   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.crt: {Name:mkfe452c0a0627f3f4ded26dc9d44f411ab5e76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.844028   57807 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.key ...
	I0729 01:58:19.844052   57807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.key: {Name:mk416c0db8694a7fb23c80595d61c6122e4adf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:58:19.844295   57807 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:58:19.844347   57807 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:58:19.844360   57807 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:58:19.844402   57807 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:58:19.844432   57807 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:58:19.844459   57807 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:58:19.844506   57807 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:58:19.845879   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:58:19.875821   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:58:19.905056   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:58:19.932666   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:58:19.959327   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 01:58:19.986029   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 01:58:20.011363   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:58:20.116910   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 01:58:20.146081   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:58:20.177675   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:58:20.209015   57807 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:58:20.241307   57807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:58:20.273748   57807 ssh_runner.go:195] Run: openssl version
	I0729 01:58:20.282495   57807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:58:20.303614   57807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:58:20.317288   57807 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:58:20.317378   57807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:58:20.329307   57807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:58:20.353872   57807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:58:20.373242   57807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:58:20.387445   57807 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:58:20.387519   57807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:58:20.395487   57807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:58:20.410011   57807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:58:20.422687   57807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:58:20.429974   57807 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:58:20.430032   57807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:58:20.438362   57807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
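	The three ln -fs calls above follow the standard OpenSSL hashed-symlink convention for CA certificates. A condensed sketch of the same idea for a single certificate, with the path and the b5213941 hash taken from this log:
	
	  # Link a CA certificate under its OpenSSL subject hash so TLS clients can locate it.
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"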
	I0729 01:58:20.455047   57807 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:58:20.461739   57807 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 01:58:20.461792   57807 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:58:20.461909   57807 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:58:20.461967   57807 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:58:20.512736   57807 cri.go:89] found id: ""
	I0729 01:58:20.512904   57807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 01:58:20.527236   57807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 01:58:20.541798   57807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 01:58:20.553277   57807 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 01:58:20.553343   57807 kubeadm.go:157] found existing configuration files:
	
	I0729 01:58:20.553398   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 01:58:20.563689   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 01:58:20.563757   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 01:58:20.574578   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 01:58:20.584892   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 01:58:20.584967   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 01:58:20.595881   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 01:58:20.607145   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 01:58:20.607215   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 01:58:20.617943   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 01:58:20.628288   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 01:58:20.628360   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 01:58:20.640520   57807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 01:58:20.806628   57807 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 01:58:20.806715   57807 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 01:58:20.969307   57807 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 01:58:20.969580   57807 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 01:58:20.969788   57807 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 01:58:21.189474   57807 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 01:58:21.192216   57807 out.go:204]   - Generating certificates and keys ...
	I0729 01:58:21.192355   57807 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 01:58:21.192467   57807 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 01:58:21.382692   57807 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 01:58:21.716289   57807 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 01:58:22.056684   57807 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 01:58:22.169524   57807 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 01:58:22.413043   57807 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 01:58:22.413268   57807 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-211243 localhost] and IPs [192.168.61.63 127.0.0.1 ::1]
	I0729 01:58:22.602734   57807 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 01:58:22.603212   57807 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-211243 localhost] and IPs [192.168.61.63 127.0.0.1 ::1]
	I0729 01:58:22.729051   57807 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 01:58:22.898353   57807 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 01:58:23.219202   57807 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 01:58:23.220684   57807 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 01:58:23.331996   57807 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 01:58:23.808207   57807 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 01:58:23.953111   57807 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 01:58:24.049688   57807 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 01:58:24.078037   57807 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 01:58:24.079385   57807 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 01:58:24.079447   57807 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 01:58:24.213777   57807 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 01:58:24.215567   57807 out.go:204]   - Booting up control plane ...
	I0729 01:58:24.215700   57807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 01:58:24.221171   57807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 01:58:24.222354   57807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 01:58:24.223218   57807 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 01:58:24.229572   57807 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 01:59:04.223211   57807 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 01:59:04.223845   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:04.224074   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:09.224611   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:09.224876   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:19.224073   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:19.224312   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:39.223984   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:39.224322   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:00:19.225877   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 02:00:19.226135   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:00:19.226151   57807 kubeadm.go:310] 
	I0729 02:00:19.226226   57807 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 02:00:19.226296   57807 kubeadm.go:310] 		timed out waiting for the condition
	I0729 02:00:19.226306   57807 kubeadm.go:310] 
	I0729 02:00:19.226361   57807 kubeadm.go:310] 	This error is likely caused by:
	I0729 02:00:19.226409   57807 kubeadm.go:310] 		- The kubelet is not running
	I0729 02:00:19.226568   57807 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 02:00:19.226578   57807 kubeadm.go:310] 
	I0729 02:00:19.226738   57807 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 02:00:19.226799   57807 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 02:00:19.226861   57807 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 02:00:19.226881   57807 kubeadm.go:310] 
	I0729 02:00:19.227049   57807 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 02:00:19.227200   57807 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 02:00:19.227212   57807 kubeadm.go:310] 
	I0729 02:00:19.227355   57807 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 02:00:19.227487   57807 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 02:00:19.227598   57807 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 02:00:19.227702   57807 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 02:00:19.227714   57807 kubeadm.go:310] 
	I0729 02:00:19.228410   57807 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 02:00:19.228537   57807 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 02:00:19.228693   57807 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 02:00:19.228779   57807 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-211243 localhost] and IPs [192.168.61.63 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-211243 localhost] and IPs [192.168.61.63 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
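	The kubeadm output above already names the node-side checks worth running. A compact sketch of issuing them through minikube ssh against this profile (the profile name is taken from the log; the exact invocations are an illustration, not commands the harness ran):
	
	  # Inspect the kubelet and any control-plane containers on the failing node.
	  minikube -p kubernetes-upgrade-211243 ssh "sudo systemctl status kubelet --no-pager"
	  minikube -p kubernetes-upgrade-211243 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	  minikube -p kubernetes-upgrade-211243 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"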
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-211243 localhost] and IPs [192.168.61.63 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-211243 localhost] and IPs [192.168.61.63 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 02:00:19.228836   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 02:00:21.395768   57807 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.166903992s)
	I0729 02:00:21.395869   57807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 02:00:21.409499   57807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 02:00:21.418742   57807 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 02:00:21.418768   57807 kubeadm.go:157] found existing configuration files:
	
	I0729 02:00:21.418812   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 02:00:21.428332   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 02:00:21.428386   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 02:00:21.437115   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 02:00:21.445456   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 02:00:21.445502   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 02:00:21.454446   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 02:00:21.463398   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 02:00:21.463451   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 02:00:21.472495   57807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 02:00:21.481002   57807 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 02:00:21.481063   57807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 02:00:21.489992   57807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 02:00:21.554822   57807 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 02:00:21.554912   57807 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 02:00:21.698153   57807 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 02:00:21.698332   57807 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 02:00:21.698490   57807 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 02:00:21.874669   57807 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 02:00:21.876773   57807 out.go:204]   - Generating certificates and keys ...
	I0729 02:00:21.876872   57807 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 02:00:21.876953   57807 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 02:00:21.877047   57807 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 02:00:21.877102   57807 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 02:00:21.877192   57807 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 02:00:21.877288   57807 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 02:00:21.877407   57807 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 02:00:21.878233   57807 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 02:00:21.878937   57807 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 02:00:21.879788   57807 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 02:00:21.880001   57807 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 02:00:21.880102   57807 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 02:00:22.294702   57807 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 02:00:22.361849   57807 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 02:00:22.458684   57807 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 02:00:22.769479   57807 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 02:00:22.784221   57807 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 02:00:22.786382   57807 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 02:00:22.786427   57807 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 02:00:22.926822   57807 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 02:00:22.928906   57807 out.go:204]   - Booting up control plane ...
	I0729 02:00:22.929017   57807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 02:00:22.937873   57807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 02:00:22.938785   57807 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 02:00:22.939533   57807 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 02:00:22.941586   57807 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 02:01:02.944519   57807 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 02:01:02.944646   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 02:01:02.944902   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:01:07.946092   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 02:01:07.946363   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:01:17.946978   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 02:01:17.947293   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:01:37.946513   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 02:01:37.946725   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:02:17.946431   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 02:02:17.946738   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 02:02:17.946762   57807 kubeadm.go:310] 
	I0729 02:02:17.946801   57807 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 02:02:17.946833   57807 kubeadm.go:310] 		timed out waiting for the condition
	I0729 02:02:17.946839   57807 kubeadm.go:310] 
	I0729 02:02:17.946879   57807 kubeadm.go:310] 	This error is likely caused by:
	I0729 02:02:17.946916   57807 kubeadm.go:310] 		- The kubelet is not running
	I0729 02:02:17.947014   57807 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 02:02:17.947022   57807 kubeadm.go:310] 
	I0729 02:02:17.947137   57807 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 02:02:17.947204   57807 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 02:02:17.947246   57807 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 02:02:17.947259   57807 kubeadm.go:310] 
	I0729 02:02:17.947392   57807 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 02:02:17.947499   57807 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 02:02:17.947535   57807 kubeadm.go:310] 
	I0729 02:02:17.947711   57807 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 02:02:17.947844   57807 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 02:02:17.947938   57807 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 02:02:17.948041   57807 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 02:02:17.948053   57807 kubeadm.go:310] 
	I0729 02:02:17.948641   57807 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 02:02:17.948761   57807 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 02:02:17.948918   57807 kubeadm.go:394] duration metric: took 3m57.487129133s to StartCluster
	I0729 02:02:17.948956   57807 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 02:02:17.949003   57807 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:02:17.949068   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:02:17.995253   57807 cri.go:89] found id: ""
	I0729 02:02:17.995286   57807 logs.go:276] 0 containers: []
	W0729 02:02:17.995297   57807 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:02:17.995304   57807 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:02:17.995370   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:02:18.030422   57807 cri.go:89] found id: ""
	I0729 02:02:18.030453   57807 logs.go:276] 0 containers: []
	W0729 02:02:18.030463   57807 logs.go:278] No container was found matching "etcd"
	I0729 02:02:18.030470   57807 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:02:18.030531   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:02:18.066141   57807 cri.go:89] found id: ""
	I0729 02:02:18.066173   57807 logs.go:276] 0 containers: []
	W0729 02:02:18.066185   57807 logs.go:278] No container was found matching "coredns"
	I0729 02:02:18.066192   57807 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:02:18.066252   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:02:18.105488   57807 cri.go:89] found id: ""
	I0729 02:02:18.105523   57807 logs.go:276] 0 containers: []
	W0729 02:02:18.105530   57807 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:02:18.105536   57807 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:02:18.105605   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:02:18.142057   57807 cri.go:89] found id: ""
	I0729 02:02:18.142085   57807 logs.go:276] 0 containers: []
	W0729 02:02:18.142100   57807 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:02:18.142107   57807 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:02:18.142173   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:02:18.179517   57807 cri.go:89] found id: ""
	I0729 02:02:18.179550   57807 logs.go:276] 0 containers: []
	W0729 02:02:18.179560   57807 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:02:18.179568   57807 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:02:18.179630   57807 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:02:18.216021   57807 cri.go:89] found id: ""
	I0729 02:02:18.216059   57807 logs.go:276] 0 containers: []
	W0729 02:02:18.216072   57807 logs.go:278] No container was found matching "kindnet"
	I0729 02:02:18.216085   57807 logs.go:123] Gathering logs for kubelet ...
	I0729 02:02:18.216102   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:02:18.280221   57807 logs.go:123] Gathering logs for dmesg ...
	I0729 02:02:18.280258   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:02:18.296545   57807 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:02:18.296574   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:02:18.423621   57807 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:02:18.423646   57807 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:02:18.423659   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:02:18.555819   57807 logs.go:123] Gathering logs for container status ...
	I0729 02:02:18.555857   57807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
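	The kubelet, dmesg, CRI-O and container-status output being gathered here can also be pulled from the host in one pass. A brief illustration with this profile name (the flag value and output file are an example, not what the harness used):
	
	  # Aggregate node logs (kubelet, container runtime, dmesg, ...) from the host side.
	  minikube -p kubernetes-upgrade-211243 logs -n 400 > kubernetes-upgrade-211243.log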
	W0729 02:02:18.601904   57807 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 02:02:18.601944   57807 out.go:239] * 
	W0729 02:02:18.601991   57807 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 02:02:18.602011   57807 out.go:239] * 
	W0729 02:02:18.602953   57807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 02:02:18.606266   57807 out.go:177] 
	W0729 02:02:18.607555   57807 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 02:02:18.607646   57807 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 02:02:18.607681   57807 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 02:02:18.609401   57807 out.go:177] 

                                                
                                                
** /stderr **
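For reference, the remediation the log itself points at (inspecting the kubelet with journalctl and retrying with the kubelet.cgroup-driver extra-config) can be tried by hand. The sketch below reuses the profile name and flags from this run; combining them in a single retry is an assumption, not something the test executed:

	# inspect the kubelet on the node that failed kubeadm init
	out/minikube-linux-amd64 -p kubernetes-upgrade-211243 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# retry the v1.20.0 start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd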
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-211243
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-211243: (2.297497137s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-211243 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-211243 status --format={{.Host}}: exit status 7 (65.831346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0729 02:02:23.071239   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.405968439s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-211243 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.005051ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-211243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-211243
	    minikube start -p kubernetes-upgrade-211243 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2112432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-211243 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
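If a v1.20.0 cluster were actually wanted at this point, option 1 from the suggestion above is the applicable route. A sketch of that flow, adding the driver and runtime flags used elsewhere in this run (the suggestion itself omits them, so their inclusion here is an assumption):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-211243
	out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio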
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (14m1.712360947s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-211243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-211243" primary control-plane node in "kubernetes-upgrade-211243" cluster
	* Updating the running kvm2 "kubernetes-upgrade-211243" VM ...
	* Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 02:03:07.652400   67284 out.go:291] Setting OutFile to fd 1 ...
	I0729 02:03:07.652553   67284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 02:03:07.652584   67284 out.go:304] Setting ErrFile to fd 2...
	I0729 02:03:07.652601   67284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 02:03:07.652826   67284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 02:03:07.653558   67284 out.go:298] Setting JSON to false
	I0729 02:03:07.654944   67284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6334,"bootTime":1722212254,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 02:03:07.655053   67284 start.go:139] virtualization: kvm guest
	I0729 02:03:07.657343   67284 out.go:177] * [kubernetes-upgrade-211243] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 02:03:07.658680   67284 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 02:03:07.658723   67284 notify.go:220] Checking for updates...
	I0729 02:03:07.661032   67284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 02:03:07.662468   67284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 02:03:07.663690   67284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 02:03:07.665007   67284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 02:03:07.666546   67284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 02:03:07.668384   67284 config.go:182] Loaded profile config "kubernetes-upgrade-211243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 02:03:07.668802   67284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:03:07.668940   67284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:03:07.685860   67284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0729 02:03:07.686321   67284 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:03:07.686788   67284 main.go:141] libmachine: Using API Version  1
	I0729 02:03:07.686804   67284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:03:07.687390   67284 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:03:07.687583   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:07.687871   67284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 02:03:07.688280   67284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:03:07.688322   67284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:03:07.708687   67284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39605
	I0729 02:03:07.709477   67284 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:03:07.710200   67284 main.go:141] libmachine: Using API Version  1
	I0729 02:03:07.710234   67284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:03:07.710583   67284 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:03:07.710795   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:07.754077   67284 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 02:03:07.755546   67284 start.go:297] selected driver: kvm2
	I0729 02:03:07.755565   67284 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:03:07.755708   67284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 02:03:07.756407   67284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 02:03:07.756493   67284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 02:03:07.773624   67284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 02:03:07.773983   67284 cni.go:84] Creating CNI manager for ""
	I0729 02:03:07.773996   67284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:03:07.774041   67284 start.go:340] cluster config:
	{Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:03:07.774208   67284 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 02:03:07.776001   67284 out.go:177] * Starting "kubernetes-upgrade-211243" primary control-plane node in "kubernetes-upgrade-211243" cluster
	I0729 02:03:07.777364   67284 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 02:03:07.777433   67284 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 02:03:07.777448   67284 cache.go:56] Caching tarball of preloaded images
	I0729 02:03:07.777535   67284 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 02:03:07.777548   67284 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 02:03:07.777642   67284 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/config.json ...
	I0729 02:03:07.777860   67284 start.go:360] acquireMachinesLock for kubernetes-upgrade-211243: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 02:03:11.432026   67284 start.go:364] duration metric: took 3.654129492s to acquireMachinesLock for "kubernetes-upgrade-211243"
	I0729 02:03:11.432068   67284 start.go:96] Skipping create...Using existing machine configuration
	I0729 02:03:11.432078   67284 fix.go:54] fixHost starting: 
	I0729 02:03:11.432467   67284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:03:11.432521   67284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:03:11.452348   67284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0729 02:03:11.452841   67284 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:03:11.453315   67284 main.go:141] libmachine: Using API Version  1
	I0729 02:03:11.453335   67284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:03:11.453657   67284 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:03:11.453810   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:11.453985   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetState
	I0729 02:03:11.455579   67284 fix.go:112] recreateIfNeeded on kubernetes-upgrade-211243: state=Running err=<nil>
	W0729 02:03:11.455611   67284 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 02:03:11.457710   67284 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-211243" VM ...
	I0729 02:03:11.458991   67284 machine.go:94] provisionDockerMachine start ...
	I0729 02:03:11.459018   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:11.460391   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:11.463331   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.463757   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:11.463785   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.464085   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:11.464279   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.464448   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.464604   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:11.464785   67284 main.go:141] libmachine: Using SSH client type: native
	I0729 02:03:11.465006   67284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 02:03:11.465018   67284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 02:03:11.577759   67284 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-211243
	
	I0729 02:03:11.577795   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 02:03:11.578060   67284 buildroot.go:166] provisioning hostname "kubernetes-upgrade-211243"
	I0729 02:03:11.578089   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 02:03:11.578295   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:11.581802   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.582277   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:11.582331   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.582516   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:11.582737   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.582928   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.583093   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:11.583271   67284 main.go:141] libmachine: Using SSH client type: native
	I0729 02:03:11.583503   67284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 02:03:11.583522   67284 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-211243 && echo "kubernetes-upgrade-211243" | sudo tee /etc/hostname
	I0729 02:03:11.720993   67284 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-211243
	
	I0729 02:03:11.721027   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:11.724224   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.724626   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:11.724674   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.724881   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:11.725097   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.725286   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.725451   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:11.725609   67284 main.go:141] libmachine: Using SSH client type: native
	I0729 02:03:11.725765   67284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 02:03:11.725781   67284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-211243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-211243/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-211243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 02:03:11.836859   67284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 02:03:11.836902   67284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 02:03:11.836920   67284 buildroot.go:174] setting up certificates
	I0729 02:03:11.836928   67284 provision.go:84] configureAuth start
	I0729 02:03:11.836948   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetMachineName
	I0729 02:03:11.837249   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 02:03:11.840070   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.840444   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:11.840496   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.840706   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:11.843513   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.843798   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:11.843815   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.844003   67284 provision.go:143] copyHostCerts
	I0729 02:03:11.844079   67284 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 02:03:11.844095   67284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 02:03:11.844163   67284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 02:03:11.844280   67284 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 02:03:11.844290   67284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 02:03:11.844342   67284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 02:03:11.844459   67284 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 02:03:11.844470   67284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 02:03:11.844501   67284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 02:03:11.844584   67284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-211243 san=[127.0.0.1 192.168.61.63 kubernetes-upgrade-211243 localhost minikube]
	I0729 02:03:11.964262   67284 provision.go:177] copyRemoteCerts
	I0729 02:03:11.964350   67284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 02:03:11.964383   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:11.966991   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.967312   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:11.967351   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:11.967560   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:11.967789   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:11.967984   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:11.968145   67284 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 02:03:12.056024   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 02:03:12.084402   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 02:03:12.110102   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 02:03:12.138017   67284 provision.go:87] duration metric: took 301.076444ms to configureAuth
	I0729 02:03:12.138045   67284 buildroot.go:189] setting minikube options for container-runtime
	I0729 02:03:12.138249   67284 config.go:182] Loaded profile config "kubernetes-upgrade-211243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 02:03:12.138338   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:12.141578   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:12.141996   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:12.142031   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:12.142191   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:12.142385   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:12.142571   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:12.142731   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:12.142898   67284 main.go:141] libmachine: Using SSH client type: native
	I0729 02:03:12.143160   67284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 02:03:12.143181   67284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 02:03:21.484842   67284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 02:03:21.484883   67284 machine.go:97] duration metric: took 10.025875691s to provisionDockerMachine
	I0729 02:03:21.484896   67284 start.go:293] postStartSetup for "kubernetes-upgrade-211243" (driver="kvm2")
	I0729 02:03:21.484911   67284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 02:03:21.484931   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:21.485362   67284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 02:03:21.485399   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:21.488562   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.488964   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:21.488996   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.489274   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:21.489438   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:21.489588   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:21.489694   67284 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 02:03:21.579045   67284 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 02:03:21.583327   67284 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 02:03:21.583351   67284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 02:03:21.583419   67284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 02:03:21.583493   67284 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 02:03:21.583590   67284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 02:03:21.592690   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:03:21.618431   67284 start.go:296] duration metric: took 133.50494ms for postStartSetup
	I0729 02:03:21.618480   67284 fix.go:56] duration metric: took 10.186401377s for fixHost
	I0729 02:03:21.618505   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:21.621396   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.621755   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:21.621779   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.621943   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:21.622154   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:21.622336   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:21.622499   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:21.622702   67284 main.go:141] libmachine: Using SSH client type: native
	I0729 02:03:21.622928   67284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.63 22 <nil> <nil>}
	I0729 02:03:21.622956   67284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 02:03:21.834216   67284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722218601.834468168
	
	I0729 02:03:21.834242   67284 fix.go:216] guest clock: 1722218601.834468168
	I0729 02:03:21.834251   67284 fix.go:229] Guest: 2024-07-29 02:03:21.834468168 +0000 UTC Remote: 2024-07-29 02:03:21.618485717 +0000 UTC m=+14.021203871 (delta=215.982451ms)
	I0729 02:03:21.834281   67284 fix.go:200] guest clock delta is within tolerance: 215.982451ms
	I0729 02:03:21.834289   67284 start.go:83] releasing machines lock for "kubernetes-upgrade-211243", held for 10.402240095s
	I0729 02:03:21.834315   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:21.834600   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 02:03:21.837940   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.838399   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:21.838431   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.838612   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:21.839277   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:21.839485   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .DriverName
	I0729 02:03:21.839580   67284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 02:03:21.839623   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:21.839701   67284 ssh_runner.go:195] Run: cat /version.json
	I0729 02:03:21.839725   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHHostname
	I0729 02:03:21.842526   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.842830   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:21.842859   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.842880   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.843107   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:21.843320   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:21.843462   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:03:21.843485   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:03:21.843517   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:21.843643   67284 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 02:03:21.843827   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHPort
	I0729 02:03:21.843988   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHKeyPath
	I0729 02:03:21.844156   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetSSHUsername
	I0729 02:03:21.844275   67284 sshutil.go:53] new ssh client: &{IP:192.168.61.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/kubernetes-upgrade-211243/id_rsa Username:docker}
	I0729 02:03:22.084396   67284 ssh_runner.go:195] Run: systemctl --version
	I0729 02:03:22.186983   67284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 02:03:22.502176   67284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 02:03:22.525054   67284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 02:03:22.525107   67284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 02:03:22.598505   67284 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 02:03:22.598535   67284 start.go:495] detecting cgroup driver to use...
	I0729 02:03:22.598607   67284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 02:03:22.765349   67284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 02:03:22.799162   67284 docker.go:217] disabling cri-docker service (if available) ...
	I0729 02:03:22.799232   67284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 02:03:22.824961   67284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 02:03:22.977382   67284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 02:03:23.318633   67284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 02:03:23.628131   67284 docker.go:233] disabling docker service ...
	I0729 02:03:23.628196   67284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 02:03:23.656188   67284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 02:03:23.676988   67284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 02:03:23.876350   67284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 02:03:24.076334   67284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 02:03:24.093846   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 02:03:24.115154   67284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 02:03:24.115234   67284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.130893   67284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 02:03:24.130953   67284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.144784   67284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.158099   67284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.172775   67284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 02:03:24.185004   67284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.199594   67284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.215118   67284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:03:24.233231   67284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 02:03:24.247016   67284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 02:03:24.261216   67284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:03:24.442241   67284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 02:04:55.095887   67284 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.653607771s)
	I0729 02:04:55.095913   67284 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 02:04:55.095984   67284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 02:04:55.101866   67284 start.go:563] Will wait 60s for crictl version
	I0729 02:04:55.101934   67284 ssh_runner.go:195] Run: which crictl
	I0729 02:04:55.106073   67284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 02:04:55.147600   67284 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 02:04:55.147698   67284 ssh_runner.go:195] Run: crio --version
	I0729 02:04:55.179224   67284 ssh_runner.go:195] Run: crio --version
	I0729 02:04:55.283278   67284 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 02:04:55.329447   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) Calling .GetIP
	I0729 02:04:55.332280   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:04:55.332673   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:2d:1a", ip: ""} in network mk-kubernetes-upgrade-211243: {Iface:virbr1 ExpiryTime:2024-07-29 03:02:38 +0000 UTC Type:0 Mac:52:54:00:ce:2d:1a Iaid: IPaddr:192.168.61.63 Prefix:24 Hostname:kubernetes-upgrade-211243 Clientid:01:52:54:00:ce:2d:1a}
	I0729 02:04:55.332702   67284 main.go:141] libmachine: (kubernetes-upgrade-211243) DBG | domain kubernetes-upgrade-211243 has defined IP address 192.168.61.63 and MAC address 52:54:00:ce:2d:1a in network mk-kubernetes-upgrade-211243
	I0729 02:04:55.332958   67284 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 02:04:55.338167   67284 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 02:04:55.338269   67284 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 02:04:55.338314   67284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:04:55.385888   67284 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 02:04:55.385910   67284 crio.go:433] Images already preloaded, skipping extraction
	I0729 02:04:55.385955   67284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:04:55.426794   67284 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 02:04:55.426816   67284 cache_images.go:84] Images are preloaded, skipping loading
	I0729 02:04:55.426825   67284 kubeadm.go:934] updating node { 192.168.61.63 8443 v1.31.0-beta.0 crio true true} ...
	I0729 02:04:55.426944   67284 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-211243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 02:04:55.427021   67284 ssh_runner.go:195] Run: crio config
	I0729 02:04:55.478476   67284 cni.go:84] Creating CNI manager for ""
	I0729 02:04:55.478505   67284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:04:55.478520   67284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 02:04:55.478542   67284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.63 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-211243 NodeName:kubernetes-upgrade-211243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 02:04:55.478712   67284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-211243"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 02:04:55.478770   67284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 02:04:55.489172   67284 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 02:04:55.489247   67284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 02:04:55.500212   67284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0729 02:04:55.519284   67284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 02:04:55.538564   67284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0729 02:04:55.557807   67284 ssh_runner.go:195] Run: grep 192.168.61.63	control-plane.minikube.internal$ /etc/hosts
	I0729 02:04:55.563229   67284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:04:55.744194   67284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 02:04:55.758874   67284 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243 for IP: 192.168.61.63
	I0729 02:04:55.758904   67284 certs.go:194] generating shared ca certs ...
	I0729 02:04:55.758926   67284 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:04:55.759098   67284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 02:04:55.759155   67284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 02:04:55.759170   67284 certs.go:256] generating profile certs ...
	I0729 02:04:55.759271   67284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/client.key
	I0729 02:04:55.759339   67284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key.ac452c37
	I0729 02:04:55.759398   67284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.key
	I0729 02:04:55.759551   67284 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 02:04:55.759588   67284 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 02:04:55.759601   67284 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 02:04:55.759637   67284 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 02:04:55.759665   67284 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 02:04:55.759693   67284 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 02:04:55.759748   67284 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:04:55.760585   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 02:04:55.791003   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 02:04:55.817552   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 02:04:55.842364   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 02:04:55.867436   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 02:04:55.893169   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 02:04:55.917336   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 02:04:55.941072   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kubernetes-upgrade-211243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 02:04:55.964990   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 02:04:55.988761   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 02:04:56.013206   67284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 02:04:56.036632   67284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 02:04:56.106980   67284 ssh_runner.go:195] Run: openssl version
	I0729 02:04:56.124989   67284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 02:04:56.179824   67284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:04:56.230171   67284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:04:56.230246   67284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:04:56.293209   67284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 02:04:56.338442   67284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 02:04:56.436033   67284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 02:04:56.454463   67284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 02:04:56.454535   67284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 02:04:56.481851   67284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 02:04:56.531735   67284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 02:04:56.566005   67284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 02:04:56.578491   67284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 02:04:56.578564   67284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 02:04:56.591484   67284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 02:04:56.606732   67284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 02:04:56.613036   67284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 02:04:56.624302   67284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 02:04:56.636043   67284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 02:04:56.652183   67284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 02:04:56.658990   67284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 02:04:56.665122   67284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 02:04:56.681670   67284 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-211243 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-211243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.63 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:04:56.681788   67284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 02:04:56.681898   67284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:04:56.752122   67284 cri.go:89] found id: "fca7f4260c1f72a384f52885d75fae960036fa33398ab13eb5ce54f7386107d7"
	I0729 02:04:56.752151   67284 cri.go:89] found id: "d8d6db93b2bad1f3c5ca924a91fb4c8ca4dd5e91de898b9f99f397d929ee1f71"
	I0729 02:04:56.752161   67284 cri.go:89] found id: "e8f96cfc8912269a8db8551f95749c38127fa085a98b2941a8a29e4830e38980"
	I0729 02:04:56.752180   67284 cri.go:89] found id: "43d29973be07fd53ca0e28c9415cc8862c70b684033900fc8fb39bf3c4ecbce8"
	I0729 02:04:56.752186   67284 cri.go:89] found id: "97788a70719f8a0b4920bd46ed306c56b61963284bd002ee22371b20a1f1ab34"
	I0729 02:04:56.752190   67284 cri.go:89] found id: "f56e91bfa9e110b0d5cf70ac5636eca36925b95d123beb3eb90d51cd96328f6c"
	I0729 02:04:56.752229   67284 cri.go:89] found id: "0c0323d217491697667786cab90790df02a25cf0d5692ac865526bef20fb5ea3"
	I0729 02:04:56.752242   67284 cri.go:89] found id: "e98f0015418113f170c19c0ba8b13f2072e882d0129e97e7c4df07c4d87fff9c"
	I0729 02:04:56.752246   67284 cri.go:89] found id: "6f2ea98bed71ae745a74eb71682140067bb1da27f7245109e4205a7e98a46009"
	I0729 02:04:56.752255   67284 cri.go:89] found id: "c19b455a1f200272608f118b68f39d9c712d735165c9b2f6f4dadbfd51b01f9c"
	I0729 02:04:56.752260   67284 cri.go:89] found id: "078af8b94159e4ed18356c9f8cd8107cb50515c0b7c438aca85b48b9a494288f"
	I0729 02:04:56.752264   67284 cri.go:89] found id: "dd4c646df7cc88ca02f30376bec3807da42e8a011c592e44268a11b9916d5322"
	I0729 02:04:56.752267   67284 cri.go:89] found id: ""
	I0729 02:04:56.752319   67284 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-211243 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 02:17:09.318123963 +0000 UTC m=+5423.884752078
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-211243 -n kubernetes-upgrade-211243
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-211243 -n kubernetes-upgrade-211243: exit status 2 (226.794087ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-211243 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-211243 logs -n 25: (1.105603421s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-464146 sudo cat                              | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo                                  | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo                                  | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo                                  | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo cat                              | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo cat                              | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo                                  | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo                                  | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo                                  | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo find                             | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-464146 sudo crio                             | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-464146                                       | bridge-464146          | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:05 UTC |
	| start   | -p embed-certs-436055                                  | embed-certs-436055     | jenkins | v1.33.1 | 29 Jul 24 02:05 UTC | 29 Jul 24 02:06 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-436055            | embed-certs-436055     | jenkins | v1.33.1 | 29 Jul 24 02:06 UTC | 29 Jul 24 02:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-436055                                  | embed-certs-436055     | jenkins | v1.33.1 | 29 Jul 24 02:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944718             | no-preload-944718      | jenkins | v1.33.1 | 29 Jul 24 02:06 UTC | 29 Jul 24 02:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-944718                                   | no-preload-944718      | jenkins | v1.33.1 | 29 Jul 24 02:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-403582        | old-k8s-version-403582 | jenkins | v1.33.1 | 29 Jul 24 02:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-436055                 | embed-certs-436055     | jenkins | v1.33.1 | 29 Jul 24 02:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-436055                                  | embed-certs-436055     | jenkins | v1.33.1 | 29 Jul 24 02:09 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944718                  | no-preload-944718      | jenkins | v1.33.1 | 29 Jul 24 02:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-944718 --memory=2200                     | no-preload-944718      | jenkins | v1.33.1 | 29 Jul 24 02:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-403582                              | old-k8s-version-403582 | jenkins | v1.33.1 | 29 Jul 24 02:10 UTC | 29 Jul 24 02:10 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-403582             | old-k8s-version-403582 | jenkins | v1.33.1 | 29 Jul 24 02:10 UTC | 29 Jul 24 02:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-403582                              | old-k8s-version-403582 | jenkins | v1.33.1 | 29 Jul 24 02:10 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 02:10:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 02:10:29.116108   74868 out.go:291] Setting OutFile to fd 1 ...
	I0729 02:10:29.116217   74868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 02:10:29.116222   74868 out.go:304] Setting ErrFile to fd 2...
	I0729 02:10:29.116226   74868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 02:10:29.116424   74868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 02:10:29.116964   74868 out.go:298] Setting JSON to false
	I0729 02:10:29.117902   74868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6775,"bootTime":1722212254,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 02:10:29.117960   74868 start.go:139] virtualization: kvm guest
	I0729 02:10:29.120572   74868 out.go:177] * [old-k8s-version-403582] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 02:10:29.121954   74868 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 02:10:29.121952   74868 notify.go:220] Checking for updates...
	I0729 02:10:29.124530   74868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 02:10:29.125753   74868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 02:10:29.126997   74868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 02:10:29.128178   74868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 02:10:29.129250   74868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 02:10:29.130711   74868 config.go:182] Loaded profile config "old-k8s-version-403582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 02:10:29.131172   74868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:10:29.131254   74868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:10:29.146162   74868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0729 02:10:29.146571   74868 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:10:29.147158   74868 main.go:141] libmachine: Using API Version  1
	I0729 02:10:29.147178   74868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:10:29.147511   74868 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:10:29.147704   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:10:29.149358   74868 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 02:10:29.150429   74868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 02:10:29.150714   74868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:10:29.150745   74868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:10:29.165377   74868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0729 02:10:29.165780   74868 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:10:29.166287   74868 main.go:141] libmachine: Using API Version  1
	I0729 02:10:29.166307   74868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:10:29.166617   74868 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:10:29.166801   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:10:29.202145   74868 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 02:10:29.203532   74868 start.go:297] selected driver: kvm2
	I0729 02:10:29.203546   74868 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-403582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-403582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:10:29.203655   74868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 02:10:29.204341   74868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 02:10:29.204419   74868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 02:10:29.219343   74868 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 02:10:29.219726   74868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 02:10:29.219789   74868 cni.go:84] Creating CNI manager for ""
	I0729 02:10:29.219804   74868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:10:29.219845   74868 start.go:340] cluster config:
	{Name:old-k8s-version-403582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-403582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:10:29.219943   74868 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 02:10:29.222677   74868 out.go:177] * Starting "old-k8s-version-403582" primary control-plane node in "old-k8s-version-403582" cluster
	I0729 02:10:28.127312   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:10:29.223875   74868 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 02:10:29.223910   74868 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 02:10:29.223933   74868 cache.go:56] Caching tarball of preloaded images
	I0729 02:10:29.224011   74868 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 02:10:29.224020   74868 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 02:10:29.224119   74868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/config.json ...
	I0729 02:10:29.224293   74868 start.go:360] acquireMachinesLock for old-k8s-version-403582: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 02:10:34.207335   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:10:37.279327   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:10:43.359318   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:10:46.431318   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:10:52.511291   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:10:55.583329   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:01.663379   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:04.735318   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:10.815355   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:13.887316   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:19.967361   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:23.039270   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:29.119337   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:32.191297   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:38.271321   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:41.343356   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:47.423327   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:50.499327   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:56.575337   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:11:59.647291   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:05.727416   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:08.799318   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:14.879345   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:17.951274   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:24.031357   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:27.103370   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:33.183328   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:36.255329   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:42.335313   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:45.407366   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:51.487321   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:12:54.559273   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:00.639353   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:04.886362   67284 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0729 02:13:04.886472   67284 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 02:13:04.888094   67284 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 02:13:04.888161   67284 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 02:13:04.888245   67284 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 02:13:04.888379   67284 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 02:13:04.888472   67284 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 02:13:04.888535   67284 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 02:13:04.890711   67284 out.go:204]   - Generating certificates and keys ...
	I0729 02:13:04.890779   67284 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 02:13:04.890841   67284 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 02:13:04.890908   67284 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 02:13:04.890960   67284 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 02:13:04.891017   67284 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 02:13:04.891077   67284 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 02:13:04.891184   67284 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 02:13:04.891297   67284 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 02:13:04.891387   67284 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 02:13:04.891470   67284 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 02:13:04.891510   67284 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 02:13:04.891578   67284 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 02:13:04.891652   67284 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 02:13:04.891728   67284 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 02:13:04.891786   67284 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 02:13:04.891842   67284 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 02:13:04.891887   67284 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 02:13:04.891968   67284 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 02:13:04.892038   67284 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 02:13:03.715326   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:04.893584   67284 out.go:204]   - Booting up control plane ...
	I0729 02:13:04.893668   67284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 02:13:04.893733   67284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 02:13:04.893787   67284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 02:13:04.893878   67284 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 02:13:04.893967   67284 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 02:13:04.894014   67284 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 02:13:04.894190   67284 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 02:13:04.894288   67284 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 02:13:04.894369   67284 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001132331s
	I0729 02:13:04.894451   67284 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 02:13:04.894522   67284 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000640531s
	I0729 02:13:04.894532   67284 kubeadm.go:310] 
	I0729 02:13:04.894591   67284 kubeadm.go:310] Unfortunately, an error has occurred:
	I0729 02:13:04.894618   67284 kubeadm.go:310] 	context deadline exceeded
	I0729 02:13:04.894624   67284 kubeadm.go:310] 
	I0729 02:13:04.894655   67284 kubeadm.go:310] This error is likely caused by:
	I0729 02:13:04.894683   67284 kubeadm.go:310] 	- The kubelet is not running
	I0729 02:13:04.894795   67284 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 02:13:04.894814   67284 kubeadm.go:310] 
	I0729 02:13:04.894912   67284 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 02:13:04.894950   67284 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0729 02:13:04.894978   67284 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0729 02:13:04.894983   67284 kubeadm.go:310] 
	I0729 02:13:04.895076   67284 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 02:13:04.895177   67284 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 02:13:04.895250   67284 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0729 02:13:04.895344   67284 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 02:13:04.895427   67284 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0729 02:13:04.895592   67284 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	W0729 02:13:04.895665   67284 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001132331s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000640531s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0729 02:09:02.188420   10131 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0729 02:09:02.190268   10131 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 02:13:04.895710   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 02:13:06.170030   67284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.274296194s)
	I0729 02:13:06.170126   67284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 02:13:06.184783   67284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 02:13:06.194213   67284 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 02:13:06.194232   67284 kubeadm.go:157] found existing configuration files:
	
	I0729 02:13:06.194271   67284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 02:13:06.203192   67284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 02:13:06.203238   67284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 02:13:06.213273   67284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 02:13:06.222122   67284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 02:13:06.222176   67284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 02:13:06.231324   67284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 02:13:06.240023   67284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 02:13:06.240071   67284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 02:13:06.249942   67284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 02:13:06.258944   67284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 02:13:06.259000   67284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 02:13:06.268115   67284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 02:13:06.313478   67284 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 02:13:06.313530   67284 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 02:13:06.431267   67284 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 02:13:06.431392   67284 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 02:13:06.431495   67284 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 02:13:06.441132   67284 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 02:13:06.443220   67284 out.go:204]   - Generating certificates and keys ...
	I0729 02:13:06.443323   67284 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 02:13:06.443414   67284 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 02:13:06.443538   67284 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 02:13:06.443605   67284 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 02:13:06.443675   67284 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 02:13:06.443724   67284 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 02:13:06.443842   67284 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 02:13:06.443957   67284 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 02:13:06.444054   67284 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 02:13:06.444142   67284 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 02:13:06.444204   67284 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 02:13:06.444283   67284 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 02:13:06.649349   67284 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 02:13:06.766101   67284 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 02:13:06.946143   67284 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 02:13:07.049717   67284 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 02:13:07.158939   67284 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 02:13:07.159869   67284 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 02:13:07.162591   67284 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 02:13:07.164613   67284 out.go:204]   - Booting up control plane ...
	I0729 02:13:07.164735   67284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 02:13:07.164830   67284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 02:13:07.165189   67284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 02:13:07.185177   67284 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 02:13:07.199767   67284 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 02:13:07.199861   67284 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 02:13:07.329593   67284 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 02:13:07.329700   67284 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 02:13:09.791341   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:08.330649   67284 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001179259s
	I0729 02:13:08.330731   67284 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 02:13:12.863344   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:18.943301   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:22.015334   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:28.095312   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:31.167325   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:37.247318   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:40.319305   74243 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.74:22: connect: no route to host
	I0729 02:13:43.322468   74477 start.go:364] duration metric: took 4m12.362162623s to acquireMachinesLock for "no-preload-944718"
	I0729 02:13:43.322525   74477 start.go:96] Skipping create...Using existing machine configuration
	I0729 02:13:43.322533   74477 fix.go:54] fixHost starting: 
	I0729 02:13:43.322890   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:13:43.322919   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:13:43.338777   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0729 02:13:43.339206   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:13:43.339623   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:13:43.339644   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:13:43.340010   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:13:43.340205   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:13:43.340360   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetState
	I0729 02:13:43.341994   74477 fix.go:112] recreateIfNeeded on no-preload-944718: state=Stopped err=<nil>
	I0729 02:13:43.342026   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	W0729 02:13:43.342162   74477 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 02:13:43.343970   74477 out.go:177] * Restarting existing kvm2 VM for "no-preload-944718" ...
	I0729 02:13:43.320228   74243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 02:13:43.320261   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetMachineName
	I0729 02:13:43.320582   74243 buildroot.go:166] provisioning hostname "embed-certs-436055"
	I0729 02:13:43.320609   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetMachineName
	I0729 02:13:43.320805   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:13:43.322339   74243 machine.go:97] duration metric: took 4m37.425149338s to provisionDockerMachine
	I0729 02:13:43.322380   74243 fix.go:56] duration metric: took 4m37.446701372s for fixHost
	I0729 02:13:43.322389   74243 start.go:83] releasing machines lock for "embed-certs-436055", held for 4m37.446723745s
	W0729 02:13:43.322417   74243 start.go:714] error starting host: provision: host is not running
	W0729 02:13:43.322521   74243 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 02:13:43.322532   74243 start.go:729] Will try again in 5 seconds ...
	I0729 02:13:43.345249   74477 main.go:141] libmachine: (no-preload-944718) Calling .Start
	I0729 02:13:43.345401   74477 main.go:141] libmachine: (no-preload-944718) Ensuring networks are active...
	I0729 02:13:43.346127   74477 main.go:141] libmachine: (no-preload-944718) Ensuring network default is active
	I0729 02:13:43.346498   74477 main.go:141] libmachine: (no-preload-944718) Ensuring network mk-no-preload-944718 is active
	I0729 02:13:43.346927   74477 main.go:141] libmachine: (no-preload-944718) Getting domain xml...
	I0729 02:13:43.347578   74477 main.go:141] libmachine: (no-preload-944718) Creating domain...
	I0729 02:13:44.555409   74477 main.go:141] libmachine: (no-preload-944718) Waiting to get IP...
	I0729 02:13:44.556270   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:44.556694   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:44.556800   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:44.556679   75551 retry.go:31] will retry after 256.250966ms: waiting for machine to come up
	I0729 02:13:44.814275   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:44.814852   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:44.814881   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:44.814819   75551 retry.go:31] will retry after 303.549104ms: waiting for machine to come up
	I0729 02:13:45.120477   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:45.121093   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:45.121119   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:45.121041   75551 retry.go:31] will retry after 381.159205ms: waiting for machine to come up
	I0729 02:13:45.503703   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:45.504228   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:45.504254   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:45.504190   75551 retry.go:31] will retry after 447.835664ms: waiting for machine to come up
	I0729 02:13:48.324984   74243 start.go:360] acquireMachinesLock for embed-certs-436055: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 02:13:45.953742   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:45.954471   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:45.954502   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:45.954431   75551 retry.go:31] will retry after 629.064188ms: waiting for machine to come up
	I0729 02:13:46.585223   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:46.585709   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:46.585730   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:46.585666   75551 retry.go:31] will retry after 771.017119ms: waiting for machine to come up
	I0729 02:13:47.358822   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:47.359369   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:47.359398   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:47.359312   75551 retry.go:31] will retry after 886.556109ms: waiting for machine to come up
	I0729 02:13:48.247530   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:48.248015   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:48.248036   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:48.247973   75551 retry.go:31] will retry after 1.093976715s: waiting for machine to come up
	I0729 02:13:49.343656   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:49.344207   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:49.344228   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:49.344160   75551 retry.go:31] will retry after 1.375565839s: waiting for machine to come up
	I0729 02:13:50.721757   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:50.722212   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:50.722247   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:50.722164   75551 retry.go:31] will retry after 1.615144741s: waiting for machine to come up
	I0729 02:13:52.340070   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:52.340528   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:52.340557   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:52.340481   75551 retry.go:31] will retry after 2.711144329s: waiting for machine to come up
	I0729 02:13:55.053764   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:55.054260   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:55.054290   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:55.054207   75551 retry.go:31] will retry after 2.719402772s: waiting for machine to come up
	I0729 02:13:57.776992   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:13:57.777411   74477 main.go:141] libmachine: (no-preload-944718) DBG | unable to find current IP address of domain no-preload-944718 in network mk-no-preload-944718
	I0729 02:13:57.777444   74477 main.go:141] libmachine: (no-preload-944718) DBG | I0729 02:13:57.777362   75551 retry.go:31] will retry after 4.489766122s: waiting for machine to come up
	I0729 02:14:03.688429   74868 start.go:364] duration metric: took 3m34.464104242s to acquireMachinesLock for "old-k8s-version-403582"
	I0729 02:14:03.688515   74868 start.go:96] Skipping create...Using existing machine configuration
	I0729 02:14:03.688524   74868 fix.go:54] fixHost starting: 
	I0729 02:14:03.689008   74868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:03.689051   74868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:03.705696   74868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0729 02:14:03.706173   74868 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:03.706625   74868 main.go:141] libmachine: Using API Version  1
	I0729 02:14:03.706652   74868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:03.706960   74868 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:03.707141   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:03.707272   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetState
	I0729 02:14:03.708674   74868 fix.go:112] recreateIfNeeded on old-k8s-version-403582: state=Stopped err=<nil>
	I0729 02:14:03.708700   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	W0729 02:14:03.708846   74868 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 02:14:03.710706   74868 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-403582" ...
	I0729 02:14:03.712052   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .Start
	I0729 02:14:03.712202   74868 main.go:141] libmachine: (old-k8s-version-403582) Ensuring networks are active...
	I0729 02:14:03.712840   74868 main.go:141] libmachine: (old-k8s-version-403582) Ensuring network default is active
	I0729 02:14:03.713205   74868 main.go:141] libmachine: (old-k8s-version-403582) Ensuring network mk-old-k8s-version-403582 is active
	I0729 02:14:03.713567   74868 main.go:141] libmachine: (old-k8s-version-403582) Getting domain xml...
	I0729 02:14:03.714354   74868 main.go:141] libmachine: (old-k8s-version-403582) Creating domain...
	I0729 02:14:02.271283   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.271809   74477 main.go:141] libmachine: (no-preload-944718) Found IP for machine: 192.168.72.62
	I0729 02:14:02.271839   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has current primary IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.271850   74477 main.go:141] libmachine: (no-preload-944718) Reserving static IP address...
	I0729 02:14:02.272305   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "no-preload-944718", mac: "52:54:00:5a:08:b0", ip: "192.168.72.62"} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.272319   74477 main.go:141] libmachine: (no-preload-944718) Reserved static IP address: 192.168.72.62
	I0729 02:14:02.272331   74477 main.go:141] libmachine: (no-preload-944718) DBG | skip adding static IP to network mk-no-preload-944718 - found existing host DHCP lease matching {name: "no-preload-944718", mac: "52:54:00:5a:08:b0", ip: "192.168.72.62"}
	I0729 02:14:02.272341   74477 main.go:141] libmachine: (no-preload-944718) DBG | Getting to WaitForSSH function...
	I0729 02:14:02.272356   74477 main.go:141] libmachine: (no-preload-944718) Waiting for SSH to be available...
	I0729 02:14:02.274508   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.274800   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.274829   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.275007   74477 main.go:141] libmachine: (no-preload-944718) DBG | Using SSH client type: external
	I0729 02:14:02.275040   74477 main.go:141] libmachine: (no-preload-944718) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa (-rw-------)
	I0729 02:14:02.275095   74477 main.go:141] libmachine: (no-preload-944718) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 02:14:02.275117   74477 main.go:141] libmachine: (no-preload-944718) DBG | About to run SSH command:
	I0729 02:14:02.275130   74477 main.go:141] libmachine: (no-preload-944718) DBG | exit 0
	I0729 02:14:02.395184   74477 main.go:141] libmachine: (no-preload-944718) DBG | SSH cmd err, output: <nil>: 
	I0729 02:14:02.395562   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetConfigRaw
	I0729 02:14:02.396165   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetIP
	I0729 02:14:02.398653   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.399043   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.399091   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.399355   74477 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/config.json ...
	I0729 02:14:02.399543   74477 machine.go:94] provisionDockerMachine start ...
	I0729 02:14:02.399559   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:02.399753   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:02.401731   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.402003   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.402031   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.402127   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:02.402297   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:02.402437   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:02.402556   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:02.402696   74477 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:02.402885   74477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0729 02:14:02.402896   74477 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 02:14:02.503538   74477 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 02:14:02.503573   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetMachineName
	I0729 02:14:02.503822   74477 buildroot.go:166] provisioning hostname "no-preload-944718"
	I0729 02:14:02.503850   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetMachineName
	I0729 02:14:02.504041   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:02.506493   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.506857   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.506889   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.507089   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:02.507275   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:02.507441   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:02.507574   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:02.507739   74477 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:02.507905   74477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0729 02:14:02.507918   74477 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944718 && echo "no-preload-944718" | sudo tee /etc/hostname
	I0729 02:14:02.621266   74477 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944718
	
	I0729 02:14:02.621303   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:02.624065   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.624413   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.624444   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.624600   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:02.624772   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:02.624892   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:02.625009   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:02.625136   74477 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:02.625297   74477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0729 02:14:02.625312   74477 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944718' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944718/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944718' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 02:14:02.732474   74477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 02:14:02.732505   74477 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 02:14:02.732524   74477 buildroot.go:174] setting up certificates
	I0729 02:14:02.732531   74477 provision.go:84] configureAuth start
	I0729 02:14:02.732540   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetMachineName
	I0729 02:14:02.732823   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetIP
	I0729 02:14:02.735244   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.735630   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.735647   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.735783   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:02.737889   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.738160   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:02.738185   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:02.738337   74477 provision.go:143] copyHostCerts
	I0729 02:14:02.738401   74477 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 02:14:02.738414   74477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 02:14:02.738481   74477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 02:14:02.738579   74477 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 02:14:02.738587   74477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 02:14:02.738611   74477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 02:14:02.738679   74477 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 02:14:02.738686   74477 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 02:14:02.738706   74477 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 02:14:02.738752   74477 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.no-preload-944718 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-944718]
	I0729 02:14:03.032126   74477 provision.go:177] copyRemoteCerts
	I0729 02:14:03.032189   74477 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 02:14:03.032214   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:03.034873   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.035175   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.035198   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.035361   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:03.035529   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.035642   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:03.035833   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:03.117706   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 02:14:03.142764   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 02:14:03.168082   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 02:14:03.193349   74477 provision.go:87] duration metric: took 460.805966ms to configureAuth
	I0729 02:14:03.193389   74477 buildroot.go:189] setting minikube options for container-runtime
	I0729 02:14:03.193556   74477 config.go:182] Loaded profile config "no-preload-944718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 02:14:03.193651   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:03.196309   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.196709   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.196742   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.196928   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:03.197172   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.197351   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.197489   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:03.197659   74477 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:03.197842   74477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0729 02:14:03.197859   74477 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 02:14:03.460419   74477 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 02:14:03.460447   74477 machine.go:97] duration metric: took 1.060893008s to provisionDockerMachine
	I0729 02:14:03.460461   74477 start.go:293] postStartSetup for "no-preload-944718" (driver="kvm2")
	I0729 02:14:03.460478   74477 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 02:14:03.460499   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:03.460806   74477 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 02:14:03.460835   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:03.463595   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.463877   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.463907   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.464100   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:03.464291   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.464443   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:03.464588   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:03.545959   74477 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 02:14:03.550753   74477 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 02:14:03.550782   74477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 02:14:03.550867   74477 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 02:14:03.551019   74477 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 02:14:03.551167   74477 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 02:14:03.560936   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:14:03.586470   74477 start.go:296] duration metric: took 125.99661ms for postStartSetup
	I0729 02:14:03.586512   74477 fix.go:56] duration metric: took 20.263979546s for fixHost
	I0729 02:14:03.586536   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:03.588999   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.589341   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.589369   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.589476   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:03.589659   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.589797   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.589927   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:03.590070   74477 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:03.590244   74477 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0729 02:14:03.590258   74477 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 02:14:03.688296   74477 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722219243.663492450
	
	I0729 02:14:03.688317   74477 fix.go:216] guest clock: 1722219243.663492450
	I0729 02:14:03.688324   74477 fix.go:229] Guest: 2024-07-29 02:14:03.66349245 +0000 UTC Remote: 2024-07-29 02:14:03.586517128 +0000 UTC m=+272.764327586 (delta=76.975322ms)
	I0729 02:14:03.688341   74477 fix.go:200] guest clock delta is within tolerance: 76.975322ms
	I0729 02:14:03.688354   74477 start.go:83] releasing machines lock for "no-preload-944718", held for 20.365847912s
	I0729 02:14:03.688381   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:03.688662   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetIP
	I0729 02:14:03.691322   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.691636   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.691665   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.691782   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:03.692296   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:03.692490   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:03.692577   74477 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 02:14:03.692634   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:03.692777   74477 ssh_runner.go:195] Run: cat /version.json
	I0729 02:14:03.692809   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:03.695370   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.695522   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.695701   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.695736   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.695973   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:03.696011   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:03.696052   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:03.696112   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:03.696210   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.696274   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:03.696431   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:03.696437   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:03.696554   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:03.696581   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:03.797154   74477 ssh_runner.go:195] Run: systemctl --version
	I0729 02:14:03.803550   74477 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 02:14:03.945759   74477 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 02:14:03.953834   74477 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 02:14:03.953930   74477 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 02:14:03.972701   74477 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 02:14:03.972730   74477 start.go:495] detecting cgroup driver to use...
	I0729 02:14:03.972786   74477 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 02:14:03.989225   74477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 02:14:04.004466   74477 docker.go:217] disabling cri-docker service (if available) ...
	I0729 02:14:04.004531   74477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 02:14:04.018571   74477 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 02:14:04.033072   74477 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 02:14:04.146976   74477 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 02:14:04.334884   74477 docker.go:233] disabling docker service ...
	I0729 02:14:04.334952   74477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 02:14:04.364882   74477 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 02:14:04.379247   74477 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 02:14:04.519758   74477 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 02:14:04.650631   74477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 02:14:04.668304   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 02:14:04.689824   74477 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 02:14:04.689887   74477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.701457   74477 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 02:14:04.701513   74477 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.712582   74477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.723625   74477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.734443   74477 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 02:14:04.745674   74477 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.759903   74477 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.785920   74477 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:04.798846   74477 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 02:14:04.809245   74477 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 02:14:04.809306   74477 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 02:14:04.823119   74477 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 02:14:04.833223   74477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:04.955707   74477 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 02:14:05.098831   74477 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 02:14:05.098907   74477 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 02:14:05.105471   74477 start.go:563] Will wait 60s for crictl version
	I0729 02:14:05.105523   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.109515   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 02:14:05.155178   74477 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 02:14:05.155252   74477 ssh_runner.go:195] Run: crio --version
	I0729 02:14:05.186939   74477 ssh_runner.go:195] Run: crio --version
	I0729 02:14:05.219553   74477 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 02:14:05.220764   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetIP
	I0729 02:14:05.223708   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:05.224152   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:05.224182   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:05.224364   74477 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 02:14:05.229086   74477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 02:14:05.242537   74477 kubeadm.go:883] updating cluster {Name:no-preload-944718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-944718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 02:14:05.242653   74477 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 02:14:05.242690   74477 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:14:05.282386   74477 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 02:14:05.282415   74477 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 02:14:05.282492   74477 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.282521   74477 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:05.282545   74477 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 02:14:05.282523   74477 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.282702   74477 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.282491   74477 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:05.282492   74477 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:05.282549   74477 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:05.284034   74477 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:05.284073   74477 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.284078   74477 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 02:14:05.284170   74477 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:05.284217   74477 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:05.284231   74477 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.284235   74477 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:05.284171   74477 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.435611   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.442721   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.447153   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.481969   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 02:14:05.527191   74477 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 02:14:05.527244   74477 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.527302   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.527310   74477 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 02:14:05.527343   74477 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.527406   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.535506   74477 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 02:14:05.535537   74477 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.535581   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.537093   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:05.639278   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:05.642062   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:05.679895   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.680003   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.680026   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.680098   74477 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 02:14:05.680131   74477 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:05.680170   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.740943   74477 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 02:14:05.740991   74477 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:05.741039   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.741056   74477 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 02:14:05.741102   74477 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:05.741150   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:05.766251   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.785445   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.789120   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.789125   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:05.789191   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:05.789281   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:04.974802   74868 main.go:141] libmachine: (old-k8s-version-403582) Waiting to get IP...
	I0729 02:14:04.975951   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:04.976494   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:04.976556   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:04.976464   75691 retry.go:31] will retry after 189.513886ms: waiting for machine to come up
	I0729 02:14:05.167882   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:05.168442   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:05.168468   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:05.168400   75691 retry.go:31] will retry after 246.286184ms: waiting for machine to come up
	I0729 02:14:05.415819   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:05.416371   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:05.416397   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:05.416349   75691 retry.go:31] will retry after 306.785586ms: waiting for machine to come up
	I0729 02:14:05.724962   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:05.725512   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:05.725541   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:05.725441   75691 retry.go:31] will retry after 545.432093ms: waiting for machine to come up
	I0729 02:14:06.272350   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:06.272958   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:06.272982   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:06.272916   75691 retry.go:31] will retry after 678.216959ms: waiting for machine to come up
	I0729 02:14:06.953300   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:06.953836   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:06.953864   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:06.953783   75691 retry.go:31] will retry after 692.061163ms: waiting for machine to come up
	I0729 02:14:07.647214   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:07.647754   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:07.647780   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:07.647713   75691 retry.go:31] will retry after 1.00949319s: waiting for machine to come up
	I0729 02:14:08.658297   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:08.658767   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:08.658797   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:08.658715   75691 retry.go:31] will retry after 1.355861328s: waiting for machine to come up
	I0729 02:14:05.906033   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 02:14:05.906091   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 02:14:05.906203   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 02:14:05.934816   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:05.934836   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:05.934836   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:06.031313   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 02:14:06.031338   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 02:14:06.031369   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 02:14:06.031431   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 02:14:06.031436   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 02:14:06.031441   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 02:14:06.072089   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 02:14:06.077446   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 02:14:06.077527   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 02:14:06.077555   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 02:14:06.077587   74477 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 02:14:06.077617   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 02:14:06.077625   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 02:14:06.077647   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 02:14:06.126045   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 02:14:06.126170   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 02:14:06.214620   74477 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:08.737183   74477 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.659528299s)
	I0729 02:14:08.737223   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 02:14:08.737241   74477 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.659685083s)
	I0729 02:14:08.737289   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 02:14:08.737319   74477 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.61113283s)
	I0729 02:14:08.737250   74477 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 02:14:08.737347   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 02:14:08.737295   74477 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0: (2.65979238s)
	I0729 02:14:08.737384   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 02:14:08.737387   74477 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.522739648s)
	I0729 02:14:08.737396   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 02:14:08.737407   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 02:14:08.737425   74477 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 02:14:08.737455   74477 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:08.737486   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 02:14:08.737487   74477 ssh_runner.go:195] Run: which crictl
	I0729 02:14:08.742517   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 02:14:08.746063   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 02:14:08.747604   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:10.732696   74477 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.995266786s)
	I0729 02:14:10.732734   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 02:14:10.732759   74477 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 02:14:10.732770   74477 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.985137586s)
	I0729 02:14:10.732839   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:10.732847   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 02:14:10.776290   74477 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:10.016294   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:10.016828   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:10.016859   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:10.016786   75691 retry.go:31] will retry after 1.427204266s: waiting for machine to come up
	I0729 02:14:11.445207   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:11.445696   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:11.445723   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:11.445645   75691 retry.go:31] will retry after 1.780013966s: waiting for machine to come up
	I0729 02:14:13.226957   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:13.227543   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:13.227570   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:13.227501   75691 retry.go:31] will retry after 2.575963338s: waiting for machine to come up
	I0729 02:14:12.205173   74477 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.472300739s)
	I0729 02:14:12.205208   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 02:14:12.205214   74477 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.428897074s)
	I0729 02:14:12.205238   74477 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 02:14:12.205256   74477 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 02:14:12.205301   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 02:14:12.205336   74477 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 02:14:12.210294   74477 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 02:14:15.587242   74477 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.381918374s)
	I0729 02:14:15.587271   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 02:14:15.587297   74477 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 02:14:15.587346   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 02:14:15.805231   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:15.805715   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:15.805740   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:15.805663   75691 retry.go:31] will retry after 2.269590023s: waiting for machine to come up
	I0729 02:14:18.078028   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:18.078484   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | unable to find current IP address of domain old-k8s-version-403582 in network mk-old-k8s-version-403582
	I0729 02:14:18.078507   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | I0729 02:14:18.078458   75691 retry.go:31] will retry after 2.854962944s: waiting for machine to come up
	I0729 02:14:17.754334   74477 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.166964605s)
	I0729 02:14:17.754366   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 02:14:17.754398   74477 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 02:14:17.754454   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 02:14:19.726596   74477 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97211829s)
	I0729 02:14:19.726621   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 02:14:19.726641   74477 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 02:14:19.726678   74477 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 02:14:20.378577   74477 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 02:14:20.378640   74477 cache_images.go:123] Successfully loaded all cached images
	I0729 02:14:20.378649   74477 cache_images.go:92] duration metric: took 15.096219429s to LoadCachedImages
	I0729 02:14:20.378668   74477 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0729 02:14:20.378796   74477 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944718 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-944718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 02:14:20.378916   74477 ssh_runner.go:195] Run: crio config
	I0729 02:14:20.425235   74477 cni.go:84] Creating CNI manager for ""
	I0729 02:14:20.425254   74477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:14:20.425262   74477 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 02:14:20.425283   74477 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944718 NodeName:no-preload-944718 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 02:14:20.425402   74477 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944718"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 02:14:20.425459   74477 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 02:14:20.436096   74477 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 02:14:20.436158   74477 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 02:14:20.445512   74477 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0729 02:14:20.462348   74477 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 02:14:20.478711   74477 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
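
Editor's note: the kubeadm/kubelet/kube-proxy YAML printed above is rendered from the cluster config and then copied to /var/tmp/minikube/kubeadm.yaml.new (2165 bytes). A minimal Go text/template sketch of that kind of rendering step is shown below; it mirrors only the InitConfiguration block, and the struct field names (AdvertiseAddress, BindPort, CRISocket, NodeName) are illustrative assumptions, not minikube's actual template or types.

package main

import (
	"os"
	"text/template"
)

// Toy template with the same shape as the InitConfiguration block in the
// log above; field names are assumptions for illustration only.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		CRISocket        string
		NodeName         string
	}{"192.168.72.62", 8443, "unix:///var/run/crio/crio.sock", "no-preload-944718"})
}
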
	I0729 02:14:20.496155   74477 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0729 02:14:20.500019   74477 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 02:14:20.511909   74477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:20.633345   74477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 02:14:20.651255   74477 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718 for IP: 192.168.72.62
	I0729 02:14:20.651270   74477 certs.go:194] generating shared ca certs ...
	I0729 02:14:20.651285   74477 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:20.651419   74477 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 02:14:20.651458   74477 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 02:14:20.651464   74477 certs.go:256] generating profile certs ...
	I0729 02:14:20.651569   74477 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/client.key
	I0729 02:14:20.651632   74477 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/apiserver.key.c90aa2c0
	I0729 02:14:20.651669   74477 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/proxy-client.key
	I0729 02:14:20.651838   74477 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 02:14:20.651876   74477 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 02:14:20.651889   74477 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 02:14:20.651922   74477 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 02:14:20.651949   74477 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 02:14:20.651976   74477 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 02:14:20.652033   74477 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:14:20.652735   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 02:14:20.685022   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 02:14:20.710864   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 02:14:20.737386   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 02:14:20.773965   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 02:14:20.814083   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 02:14:22.400064   74243 start.go:364] duration metric: took 34.075010891s to acquireMachinesLock for "embed-certs-436055"
	I0729 02:14:22.400125   74243 start.go:96] Skipping create...Using existing machine configuration
	I0729 02:14:22.400136   74243 fix.go:54] fixHost starting: 
	I0729 02:14:22.400535   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:22.400569   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:22.420213   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I0729 02:14:22.420699   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:22.421194   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:22.421214   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:22.421535   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:22.421694   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:22.421812   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetState
	I0729 02:14:22.423211   74243 fix.go:112] recreateIfNeeded on embed-certs-436055: state=Stopped err=<nil>
	I0729 02:14:22.423233   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	W0729 02:14:22.423386   74243 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 02:14:22.425346   74243 out.go:177] * Restarting existing kvm2 VM for "embed-certs-436055" ...
	I0729 02:14:20.934719   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:20.935343   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has current primary IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:20.935376   74868 main.go:141] libmachine: (old-k8s-version-403582) Found IP for machine: 192.168.39.3
	I0729 02:14:20.935389   74868 main.go:141] libmachine: (old-k8s-version-403582) Reserving static IP address...
	I0729 02:14:20.935761   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "old-k8s-version-403582", mac: "52:54:00:7b:76:3a", ip: "192.168.39.3"} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:20.935789   74868 main.go:141] libmachine: (old-k8s-version-403582) Reserved static IP address: 192.168.39.3
	I0729 02:14:20.935804   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | skip adding static IP to network mk-old-k8s-version-403582 - found existing host DHCP lease matching {name: "old-k8s-version-403582", mac: "52:54:00:7b:76:3a", ip: "192.168.39.3"}
	I0729 02:14:20.935818   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | Getting to WaitForSSH function...
	I0729 02:14:20.935840   74868 main.go:141] libmachine: (old-k8s-version-403582) Waiting for SSH to be available...
	I0729 02:14:20.938154   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:20.938469   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:20.938626   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:20.938650   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | Using SSH client type: external
	I0729 02:14:20.938669   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa (-rw-------)
	I0729 02:14:20.938691   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 02:14:20.938716   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | About to run SSH command:
	I0729 02:14:20.938732   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | exit 0
	I0729 02:14:21.063342   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | SSH cmd err, output: <nil>: 
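
Editor's note: the "Waiting for SSH" exchange above shells out to the external ssh binary with the flags shown in the DBG line and retries `exit 0` until the command succeeds. A minimal Go sketch of that probe follows; the flags, key path, and address are taken from the log, while the 3-second interval and 40-attempt cap are assumptions, not libmachine's actual values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Flags and key path copied from the DBG line above; interval and
	// attempt cap are illustrative assumptions.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa",
		"-p", "22", "docker@192.168.39.3", "exit 0",
	}
	for attempt := 1; attempt <= 40; attempt++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
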
	I0729 02:14:21.063687   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetConfigRaw
	I0729 02:14:21.064321   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetIP
	I0729 02:14:21.066934   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.067363   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.067392   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.067691   74868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/config.json ...
	I0729 02:14:21.067858   74868 machine.go:94] provisionDockerMachine start ...
	I0729 02:14:21.067873   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:21.068088   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:21.070441   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.070819   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.070845   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.070930   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:21.071105   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.071255   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.071377   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:21.071589   74868 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:21.071838   74868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0729 02:14:21.071850   74868 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 02:14:21.171617   74868 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 02:14:21.171650   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetMachineName
	I0729 02:14:21.171910   74868 buildroot.go:166] provisioning hostname "old-k8s-version-403582"
	I0729 02:14:21.171941   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetMachineName
	I0729 02:14:21.172162   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:21.175266   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.175675   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.175705   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.175877   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:21.176067   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.176221   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.176370   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:21.176545   74868 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:21.176753   74868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0729 02:14:21.176770   74868 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-403582 && echo "old-k8s-version-403582" | sudo tee /etc/hostname
	I0729 02:14:21.295329   74868 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-403582
	
	I0729 02:14:21.295363   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:21.298669   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.299018   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.299053   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.299264   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:21.299442   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.299610   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.299762   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:21.299938   74868 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:21.300196   74868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0729 02:14:21.300224   74868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-403582' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-403582/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-403582' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 02:14:21.410865   74868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 02:14:21.410898   74868 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 02:14:21.410919   74868 buildroot.go:174] setting up certificates
	I0729 02:14:21.410930   74868 provision.go:84] configureAuth start
	I0729 02:14:21.410939   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetMachineName
	I0729 02:14:21.411244   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetIP
	I0729 02:14:21.414016   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.414392   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.414433   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.414663   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:21.417251   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.417589   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.417617   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.417767   74868 provision.go:143] copyHostCerts
	I0729 02:14:21.417839   74868 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 02:14:21.417856   74868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 02:14:21.417920   74868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 02:14:21.418050   74868 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 02:14:21.418063   74868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 02:14:21.418096   74868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 02:14:21.418179   74868 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 02:14:21.418191   74868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 02:14:21.418219   74868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 02:14:21.418308   74868 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-403582 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-403582]
	I0729 02:14:21.718892   74868 provision.go:177] copyRemoteCerts
	I0729 02:14:21.718962   74868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 02:14:21.718995   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:21.721874   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.722228   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.722264   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.722394   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:21.722581   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.722769   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:21.722895   74868 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa Username:docker}
	I0729 02:14:21.805968   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 02:14:21.835449   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 02:14:21.862639   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 02:14:21.889510   74868 provision.go:87] duration metric: took 478.566807ms to configureAuth
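
Editor's note: configureAuth above generates a server certificate whose SANs are listed in the provision.go:117 line (127.0.0.1, 192.168.39.3, localhost, minikube, old-k8s-version-403582) and then copies it to /etc/docker on the guest. Below is a self-signed crypto/x509 sketch of a certificate with that SAN set; minikube actually signs it with the CA key in certs/ca-key.pem, so the self-signed shortcut and the ECDSA key choice here are assumptions made for brevity.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in for the CA-signed server cert; SANs match the
	// provision.go:117 line above.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-403582"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-403582"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
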
	I0729 02:14:21.889541   74868 buildroot.go:189] setting minikube options for container-runtime
	I0729 02:14:21.889744   74868 config.go:182] Loaded profile config "old-k8s-version-403582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 02:14:21.889813   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:21.892389   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.892703   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:21.892734   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:21.892928   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:21.893125   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.893349   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:21.893500   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:21.893666   74868 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:21.893874   74868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0729 02:14:21.893891   74868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 02:14:22.165016   74868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 02:14:22.165044   74868 machine.go:97] duration metric: took 1.097174781s to provisionDockerMachine
	I0729 02:14:22.165057   74868 start.go:293] postStartSetup for "old-k8s-version-403582" (driver="kvm2")
	I0729 02:14:22.165070   74868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 02:14:22.165104   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:22.165428   74868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 02:14:22.165451   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:22.168528   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.169003   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:22.169032   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.169284   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:22.169459   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:22.169613   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:22.169755   74868 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa Username:docker}
	I0729 02:14:22.258083   74868 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 02:14:22.262289   74868 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 02:14:22.262311   74868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 02:14:22.262386   74868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 02:14:22.262490   74868 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 02:14:22.262604   74868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 02:14:22.271922   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:14:22.296289   74868 start.go:296] duration metric: took 131.218921ms for postStartSetup
	I0729 02:14:22.296328   74868 fix.go:56] duration metric: took 18.607803647s for fixHost
	I0729 02:14:22.296354   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:22.299211   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.299565   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:22.299592   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.299734   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:22.299945   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:22.300152   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:22.300302   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:22.300479   74868 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:22.300660   74868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0729 02:14:22.300674   74868 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 02:14:22.399902   74868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722219262.371666238
	
	I0729 02:14:22.399928   74868 fix.go:216] guest clock: 1722219262.371666238
	I0729 02:14:22.399935   74868 fix.go:229] Guest: 2024-07-29 02:14:22.371666238 +0000 UTC Remote: 2024-07-29 02:14:22.296336237 +0000 UTC m=+233.215626378 (delta=75.330001ms)
	I0729 02:14:22.399952   74868 fix.go:200] guest clock delta is within tolerance: 75.330001ms
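
Editor's note: fix.go compares the guest clock read over SSH (`date +%s.%N`) against the host clock and only resynchronizes when the delta exceeds a tolerance; here the 75.330001ms delta passes. A small Go sketch of that check is below; the 2-second threshold is an assumed value for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// withinTolerance mirrors the fix.go check above: absolute guest/host
// clock delta compared against a maximum allowed skew.
func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= max
}

func main() {
	guest := time.Unix(1722219262, 371666238)               // guest clock from the log
	host := guest.Add(-75330001 * time.Nanosecond)          // delta from the log: 75.330001ms
	d, ok := withinTolerance(guest, host, 2*time.Second)    // 2s threshold is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
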
	I0729 02:14:22.399957   74868 start.go:83] releasing machines lock for "old-k8s-version-403582", held for 18.711467309s
	I0729 02:14:22.399983   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:22.400254   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetIP
	I0729 02:14:22.403206   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.403573   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:22.403598   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.403789   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:22.404329   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:22.404521   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .DriverName
	I0729 02:14:22.404603   74868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 02:14:22.404676   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:22.404765   74868 ssh_runner.go:195] Run: cat /version.json
	I0729 02:14:22.404788   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHHostname
	I0729 02:14:22.407160   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.407452   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.407663   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:22.407715   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.407802   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:22.407802   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:22.407842   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:22.407998   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHPort
	I0729 02:14:22.408021   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:22.408164   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHKeyPath
	I0729 02:14:22.408168   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:22.408355   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetSSHUsername
	I0729 02:14:22.408363   74868 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa Username:docker}
	I0729 02:14:22.408468   74868 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/old-k8s-version-403582/id_rsa Username:docker}
	I0729 02:14:22.507703   74868 ssh_runner.go:195] Run: systemctl --version
	I0729 02:14:22.514082   74868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 02:14:22.665934   74868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 02:14:22.674814   74868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 02:14:22.674874   74868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 02:14:22.696206   74868 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 02:14:22.696229   74868 start.go:495] detecting cgroup driver to use...
	I0729 02:14:22.696295   74868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 02:14:22.716486   74868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 02:14:22.732812   74868 docker.go:217] disabling cri-docker service (if available) ...
	I0729 02:14:22.732889   74868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 02:14:22.748296   74868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 02:14:22.765180   74868 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 02:14:22.911240   74868 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 02:14:23.074257   74868 docker.go:233] disabling docker service ...
	I0729 02:14:23.074320   74868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 02:14:23.092230   74868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 02:14:23.108071   74868 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 02:14:23.243592   74868 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 02:14:23.385389   74868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 02:14:23.404932   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 02:14:23.427851   74868 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 02:14:23.427915   74868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:23.440456   74868 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 02:14:23.440521   74868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:23.455113   74868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:23.470751   74868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:23.486329   74868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 02:14:23.498583   74868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 02:14:23.514946   74868 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 02:14:23.515028   74868 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 02:14:23.534357   74868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 02:14:23.544962   74868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:23.696938   74868 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 02:14:23.870781   74868 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 02:14:23.870853   74868 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
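
Editor's note: start.go:542 above waits up to 60s for /var/run/crio/crio.sock to appear after `systemctl restart crio`, using a plain stat of the socket path. A sketch of that polling loop follows; the 500ms poll interval is an assumption, not minikube's actual value.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the deadline passes,
// analogous to the "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}
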
	I0729 02:14:23.877114   74868 start.go:563] Will wait 60s for crictl version
	I0729 02:14:23.877181   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:23.881352   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 02:14:23.932971   74868 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 02:14:23.933072   74868 ssh_runner.go:195] Run: crio --version
	I0729 02:14:23.965169   74868 ssh_runner.go:195] Run: crio --version
	I0729 02:14:24.006411   74868 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 02:14:24.007661   74868 main.go:141] libmachine: (old-k8s-version-403582) Calling .GetIP
	I0729 02:14:24.011195   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:24.011596   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:3a", ip: ""} in network mk-old-k8s-version-403582: {Iface:virbr3 ExpiryTime:2024-07-29 03:04:24 +0000 UTC Type:0 Mac:52:54:00:7b:76:3a Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-403582 Clientid:01:52:54:00:7b:76:3a}
	I0729 02:14:24.011627   74868 main.go:141] libmachine: (old-k8s-version-403582) DBG | domain old-k8s-version-403582 has defined IP address 192.168.39.3 and MAC address 52:54:00:7b:76:3a in network mk-old-k8s-version-403582
	I0729 02:14:24.011901   74868 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 02:14:24.017634   74868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 02:14:24.033298   74868 kubeadm.go:883] updating cluster {Name:old-k8s-version-403582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-403582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 02:14:24.033479   74868 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 02:14:24.033542   74868 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:14:24.095469   74868 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 02:14:24.095566   74868 ssh_runner.go:195] Run: which lz4
	I0729 02:14:24.100602   74868 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 02:14:24.106755   74868 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 02:14:24.106792   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 02:14:22.426620   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Start
	I0729 02:14:22.426802   74243 main.go:141] libmachine: (embed-certs-436055) Ensuring networks are active...
	I0729 02:14:22.427603   74243 main.go:141] libmachine: (embed-certs-436055) Ensuring network default is active
	I0729 02:14:22.427983   74243 main.go:141] libmachine: (embed-certs-436055) Ensuring network mk-embed-certs-436055 is active
	I0729 02:14:22.428375   74243 main.go:141] libmachine: (embed-certs-436055) Getting domain xml...
	I0729 02:14:22.428978   74243 main.go:141] libmachine: (embed-certs-436055) Creating domain...
	I0729 02:14:23.774289   74243 main.go:141] libmachine: (embed-certs-436055) Waiting to get IP...
	I0729 02:14:23.775140   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:23.775681   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:23.775793   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:23.775676   75839 retry.go:31] will retry after 301.264166ms: waiting for machine to come up
	I0729 02:14:24.078423   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:24.079013   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:24.079043   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:24.078974   75839 retry.go:31] will retry after 326.420739ms: waiting for machine to come up
	I0729 02:14:24.407561   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:24.408248   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:24.408279   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:24.408205   75839 retry.go:31] will retry after 486.423426ms: waiting for machine to come up
	I0729 02:14:24.896197   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:24.897048   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:24.897074   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:24.896955   75839 retry.go:31] will retry after 487.128649ms: waiting for machine to come up
	I0729 02:14:25.385756   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:25.386373   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:25.386399   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:25.386328   75839 retry.go:31] will retry after 599.387164ms: waiting for machine to come up
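
Editor's note: the retry.go:31 lines above show libmachine polling for the restarted embed-certs-436055 VM's DHCP lease with a growing, jittered backoff (301ms, 326ms, 486ms, ...). A rough Go analogue of that loop is sketched below; lookupIP and the placeholder address are hypothetical stand-ins, not libmachine calls, and the backoff parameters are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls a lookup function with a growing, jittered delay,
// loosely mirroring the retry.go lines above. lookupIP is a hypothetical
// stand-in, not a libmachine API.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder address (TEST-NET-1), not from the log
	}, 10)
	fmt.Println(ip, err)
}
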
	I0729 02:14:20.856708   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 02:14:20.881407   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 02:14:20.905495   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 02:14:20.928498   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 02:14:20.954794   74477 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 02:14:20.978283   74477 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 02:14:20.994636   74477 ssh_runner.go:195] Run: openssl version
	I0729 02:14:21.000226   74477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 02:14:21.010928   74477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 02:14:21.016011   74477 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 02:14:21.016078   74477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 02:14:21.022045   74477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 02:14:21.032409   74477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 02:14:21.042817   74477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:21.047535   74477 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:21.047576   74477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:21.053183   74477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 02:14:21.064612   74477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 02:14:21.076724   74477 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 02:14:21.081564   74477 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 02:14:21.081617   74477 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 02:14:21.088625   74477 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 02:14:21.099886   74477 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 02:14:21.104577   74477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 02:14:21.110826   74477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 02:14:21.117023   74477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 02:14:21.122860   74477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 02:14:21.128534   74477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 02:14:21.134035   74477 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
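
Editor's note: the `openssl x509 -noout -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours before deciding to reuse it. An equivalent check in Go with crypto/x509 is sketched below; the command-line argument is a placeholder path, not a minikube default.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -noout -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: checkend <cert.pem>")
		return
	}
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	fmt.Println(soon, err)
}
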
	I0729 02:14:21.139621   74477 kubeadm.go:392] StartCluster: {Name:no-preload-944718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-944718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:14:21.139699   74477 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 02:14:21.139754   74477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:14:21.183925   74477 cri.go:89] found id: ""
	I0729 02:14:21.183986   74477 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 02:14:21.194791   74477 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 02:14:21.194810   74477 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 02:14:21.194868   74477 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 02:14:21.204883   74477 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 02:14:21.205815   74477 kubeconfig.go:125] found "no-preload-944718" server: "https://192.168.72.62:8443"
	I0729 02:14:21.207791   74477 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 02:14:21.218720   74477 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0729 02:14:21.218755   74477 kubeadm.go:1160] stopping kube-system containers ...
	I0729 02:14:21.218769   74477 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 02:14:21.218822   74477 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:14:21.256940   74477 cri.go:89] found id: ""
	I0729 02:14:21.257020   74477 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 02:14:21.272686   74477 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 02:14:21.282223   74477 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 02:14:21.282243   74477 kubeadm.go:157] found existing configuration files:
	
	I0729 02:14:21.282295   74477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 02:14:21.291095   74477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 02:14:21.291160   74477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 02:14:21.301273   74477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 02:14:21.310130   74477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 02:14:21.310190   74477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 02:14:21.319438   74477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 02:14:21.330569   74477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 02:14:21.330618   74477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 02:14:21.340617   74477 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 02:14:21.349695   74477 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 02:14:21.349747   74477 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 02:14:21.359930   74477 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 02:14:21.369455   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:21.494208   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:22.496315   74477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.002075148s)
	I0729 02:14:22.496346   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:22.736215   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:22.823259   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:22.902020   74477 api_server.go:52] waiting for apiserver process to appear ...
	I0729 02:14:22.902109   74477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:23.402259   74477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:23.903134   74477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:23.927167   74477 api_server.go:72] duration metric: took 1.025149674s to wait for apiserver process to appear ...
	I0729 02:14:23.927197   74477 api_server.go:88] waiting for apiserver healthz status ...
	I0729 02:14:23.927219   74477 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0729 02:14:23.927702   74477 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0729 02:14:24.428196   74477 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0729 02:14:27.541807   74477 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 02:14:27.541863   74477 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 02:14:27.541880   74477 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0729 02:14:27.652273   74477 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 02:14:27.652311   74477 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 02:14:27.927713   74477 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0729 02:14:27.944531   74477 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:27.944564   74477 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:28.427730   74477 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0729 02:14:28.436789   74477 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:28.436828   74477 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:28.928218   74477 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0729 02:14:28.933557   74477 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0729 02:14:28.943482   74477 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 02:14:28.943514   74477 api_server.go:131] duration metric: took 5.016310035s to wait for apiserver health ...
	I0729 02:14:28.943525   74477 cni.go:84] Creating CNI manager for ""
	I0729 02:14:28.943533   74477 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:14:29.085440   74477 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
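
Between 02:14:23 and 02:14:28 the healthz probe above cycles from "connection refused", to 403 (the anonymous user may not read /healthz), to 500 (the rbac and scheduling bootstrap post-start hooks are still failing), and finally to a plain 200 "ok". A minimal sketch of such a polling loop follows; it is illustrative only and skips TLS verification purely for brevity (a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403s and 500s are expected while the control plane is still
			// bootstrapping; keep polling until the body is a plain "ok".
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.62:8443/healthz", 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver is healthy")
}
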
	I0729 02:14:25.861438   74868 crio.go:462] duration metric: took 1.760867313s to copy over tarball
	I0729 02:14:25.861537   74868 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 02:14:25.986933   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:25.987755   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:25.987784   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:25.987689   75839 retry.go:31] will retry after 607.493363ms: waiting for machine to come up
	I0729 02:14:26.596454   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:26.596956   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:26.596976   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:26.596927   75839 retry.go:31] will retry after 1.17643651s: waiting for machine to come up
	I0729 02:14:27.775464   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:27.775922   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:27.776018   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:27.775880   75839 retry.go:31] will retry after 1.294263104s: waiting for machine to come up
	I0729 02:14:29.072162   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:29.072632   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:29.072654   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:29.072593   75839 retry.go:31] will retry after 1.638514457s: waiting for machine to come up
	I0729 02:14:30.713230   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:30.713792   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:30.713820   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:30.713736   75839 retry.go:31] will retry after 1.887273974s: waiting for machine to come up
	I0729 02:14:29.319872   74477 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 02:14:29.334207   74477 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 02:14:29.359217   74477 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 02:14:29.461639   74477 system_pods.go:59] 8 kube-system pods found
	I0729 02:14:29.461682   74477 system_pods.go:61] "coredns-5cfdc65f69-tbfrw" [3bc7b04d-04ab-4814-9eaf-ea3ae51b8a0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 02:14:29.461728   74477 system_pods.go:61] "etcd-no-preload-944718" [25ac0c2d-393b-4ded-8b34-593032b866b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 02:14:29.461750   74477 system_pods.go:61] "kube-apiserver-no-preload-944718" [999596e3-2db0-463a-a5e2-b34c2d85809b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 02:14:29.461765   74477 system_pods.go:61] "kube-controller-manager-no-preload-944718" [ca9fd21d-7f8c-42a5-a883-08c4f61d3247] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 02:14:29.461782   74477 system_pods.go:61] "kube-proxy-f5blp" [9b54842a-f3ee-4bed-9d3d-3d7c128afec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 02:14:29.461793   74477 system_pods.go:61] "kube-scheduler-no-preload-944718" [27568739-5b88-445c-aca6-f65a2dc6bcf2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 02:14:29.461801   74477 system_pods.go:61] "metrics-server-78fcd8795b-4cpr8" [88e42fed-cb2d-4bb4-9196-2d7282414409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 02:14:29.461815   74477 system_pods.go:61] "storage-provisioner" [b97aff45-4e2d-4e1c-9463-3b496a6dd3b7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 02:14:29.461823   74477 system_pods.go:74] duration metric: took 102.580293ms to wait for pod list to return data ...
	I0729 02:14:29.461835   74477 node_conditions.go:102] verifying NodePressure condition ...
	I0729 02:14:29.853838   74477 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 02:14:29.853881   74477 node_conditions.go:123] node cpu capacity is 2
	I0729 02:14:29.853897   74477 node_conditions.go:105] duration metric: took 392.055621ms to run NodePressure ...
	I0729 02:14:29.853921   74477 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:31.507084   74477 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.653113637s)
	I0729 02:14:31.507131   74477 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 02:14:31.513806   74477 kubeadm.go:739] kubelet initialised
	I0729 02:14:31.513839   74477 kubeadm.go:740] duration metric: took 6.694314ms waiting for restarted kubelet to initialise ...
	I0729 02:14:31.513850   74477 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 02:14:31.521721   74477 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:31.529466   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.529500   74477 pod_ready.go:81] duration metric: took 7.748569ms for pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:31.529512   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.529522   74477 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:31.544917   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "etcd-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.544947   74477 pod_ready.go:81] duration metric: took 15.411071ms for pod "etcd-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:31.544959   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "etcd-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.544967   74477 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:31.559653   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "kube-apiserver-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.559677   74477 pod_ready.go:81] duration metric: took 14.703431ms for pod "kube-apiserver-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:31.559688   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "kube-apiserver-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.559699   74477 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:31.567990   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.568020   74477 pod_ready.go:81] duration metric: took 8.31142ms for pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:31.568031   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.568041   74477 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5blp" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:31.911233   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "kube-proxy-f5blp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.911262   74477 pod_ready.go:81] duration metric: took 343.209853ms for pod "kube-proxy-f5blp" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:31.911274   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "kube-proxy-f5blp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:31.911283   74477 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:32.314285   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "kube-scheduler-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:32.314318   74477 pod_ready.go:81] duration metric: took 403.026672ms for pod "kube-scheduler-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:32.314331   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "kube-scheduler-no-preload-944718" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:32.314343   74477 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:32.713990   74477 pod_ready.go:97] node "no-preload-944718" hosting pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:32.714022   74477 pod_ready.go:81] duration metric: took 399.669806ms for pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:32.714033   74477 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-944718" hosting pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:32.714043   74477 pod_ready.go:38] duration metric: took 1.200180218s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 02:14:32.714062   74477 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 02:14:32.731906   74477 ops.go:34] apiserver oom_adj: -16
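
The read above (cat /proc/$(pgrep kube-apiserver)/oom_adj) reports -16 for the apiserver process. A rough stdlib-only equivalent that scans /proc by command name is sketched below; it is illustrative only, and newer kernels expose oom_score_adj alongside the deprecated oom_adj file:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomAdjByName walks /proc and returns the oom_adj value of the first process
// whose command name (comm) matches name.
func oomAdjByName(name string) (string, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil {
			continue // not a PID directory, or the process already exited
		}
		if strings.TrimSpace(string(comm)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no process named %q found", name)
}

func main() {
	adj, err := oomAdjByName("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}
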
	I0729 02:14:32.731929   74477 kubeadm.go:597] duration metric: took 11.537112564s to restartPrimaryControlPlane
	I0729 02:14:32.731941   74477 kubeadm.go:394] duration metric: took 11.592323622s to StartCluster
	I0729 02:14:32.731961   74477 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:32.732037   74477 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 02:14:32.734473   74477 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:32.734794   74477 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 02:14:32.734900   74477 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 02:14:32.734992   74477 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944718"
	I0729 02:14:32.735012   74477 addons.go:69] Setting default-storageclass=true in profile "no-preload-944718"
	I0729 02:14:32.735029   74477 addons.go:69] Setting metrics-server=true in profile "no-preload-944718"
	I0729 02:14:32.735046   74477 addons.go:234] Setting addon metrics-server=true in "no-preload-944718"
	I0729 02:14:32.735038   74477 config.go:182] Loaded profile config "no-preload-944718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 02:14:32.735054   74477 addons.go:243] addon metrics-server should already be in state true
	I0729 02:14:32.735051   74477 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944718"
	I0729 02:14:32.735107   74477 host.go:66] Checking if "no-preload-944718" exists ...
	I0729 02:14:32.735024   74477 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944718"
	W0729 02:14:32.735128   74477 addons.go:243] addon storage-provisioner should already be in state true
	I0729 02:14:32.735166   74477 host.go:66] Checking if "no-preload-944718" exists ...
	I0729 02:14:32.735520   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.735525   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.735556   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.735566   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.735577   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.735655   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.736520   74477 out.go:177] * Verifying Kubernetes components...
	I0729 02:14:32.738386   74477 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:32.756298   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0729 02:14:32.756341   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0729 02:14:32.756299   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0729 02:14:32.757026   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.757095   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.757151   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.757670   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.757682   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.757698   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.757688   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.757782   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.757807   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.758096   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.758134   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.758264   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.758629   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.758647   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.759199   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.759226   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.759428   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetState
	I0729 02:14:32.763300   74477 addons.go:234] Setting addon default-storageclass=true in "no-preload-944718"
	W0729 02:14:32.763323   74477 addons.go:243] addon default-storageclass should already be in state true
	I0729 02:14:32.763353   74477 host.go:66] Checking if "no-preload-944718" exists ...
	I0729 02:14:32.763688   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.763709   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.781925   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0729 02:14:32.781925   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I0729 02:14:32.782699   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.782765   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.783421   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.783440   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.783421   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.783490   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.783779   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.784021   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetState
	I0729 02:14:32.784113   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.784278   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetState
	I0729 02:14:32.784330   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0729 02:14:32.784707   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.785293   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.785316   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.785835   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.786499   74477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:32.786531   74477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:32.786706   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:32.786741   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:32.788731   74477 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:32.788737   74477 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 02:14:29.130299   74868 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.268729245s)
	I0729 02:14:29.130328   74868 crio.go:469] duration metric: took 3.268851106s to extract the tarball
	I0729 02:14:29.130336   74868 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 02:14:29.174238   74868 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:14:29.215535   74868 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
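
The preload check above runs "sudo crictl images --output json" and, finding none of the v1.20.0 images, falls back to loading cached images. A sketch of that lookup in Go follows; it is illustrative only, and the JSON field names (images, repoTags) are assumptions about crictl's output shape rather than something taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imagesOutput models the subset of the assumed "crictl images --output json"
// structure needed here: each image carries its repo tags.
type imagesOutput struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var imgs imagesOutput
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := "registry.k8s.io/kube-apiserver:v1.20.0"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("found", want)
				return
			}
		}
	}
	fmt.Println(want, "not present; cached images would need to be loaded")
}
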
	I0729 02:14:29.215560   74868 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 02:14:29.215636   74868 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:29.215652   74868 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 02:14:29.215676   74868 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.215707   74868 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:29.215723   74868 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.215727   74868 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.215726   74868 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.215639   74868 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.217161   74868 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.217178   74868 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.217200   74868 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.217201   74868 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:29.217166   74868 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:29.217224   74868 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 02:14:29.217264   74868 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.217350   74868 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.356505   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.356507   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.362473   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.366999   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.373456   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:29.383086   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 02:14:29.391761   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.552964   74868 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 02:14:29.553090   74868 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.553133   74868 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 02:14:29.553155   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.553162   74868 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 02:14:29.553201   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.553095   74868 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 02:14:29.553230   74868 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:29.553253   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.552963   74868 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 02:14:29.552976   74868 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 02:14:29.553290   74868 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.553027   74868 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 02:14:29.553308   74868 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.553323   74868 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.553350   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.553363   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.553312   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.564016   74868 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 02:14:29.564060   74868 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.564107   74868 ssh_runner.go:195] Run: which crictl
	I0729 02:14:29.572975   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.572999   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.573042   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.573074   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.573081   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 02:14:29.573074   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:29.573132   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.745099   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.745175   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 02:14:29.745291   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.745304   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.745378   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.745400   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.749786   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:29.866156   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 02:14:29.866182   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 02:14:29.875980   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 02:14:29.938255   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 02:14:29.938325   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 02:14:29.938493   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 02:14:29.939757   74868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 02:14:30.020762   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 02:14:30.020812   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 02:14:30.026360   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 02:14:30.033355   74868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:30.076620   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 02:14:30.076654   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 02:14:30.080692   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 02:14:30.080723   74868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 02:14:30.226437   74868 cache_images.go:92] duration metric: took 1.010860024s to LoadCachedImages
	W0729 02:14:30.226514   74868 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-9421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0729 02:14:30.226527   74868 kubeadm.go:934] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0729 02:14:30.226664   74868 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-403582 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-403582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 02:14:30.226756   74868 ssh_runner.go:195] Run: crio config
	I0729 02:14:30.277974   74868 cni.go:84] Creating CNI manager for ""
	I0729 02:14:30.278002   74868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:14:30.278017   74868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 02:14:30.278044   74868 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-403582 NodeName:old-k8s-version-403582 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 02:14:30.278202   74868 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-403582"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 02:14:30.278275   74868 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 02:14:30.288498   74868 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 02:14:30.288561   74868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 02:14:30.298627   74868 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0729 02:14:30.318177   74868 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 02:14:30.337343   74868 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
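
The 2117-byte file written above is the multi-document kubeadm config dumped a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---"). Below is a small sketch that walks such a file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 and the /var/tmp/minikube/kubeadm.yaml path used elsewhere in this log:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// kubeadm.yaml is a multi-document YAML file; print each document's header.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
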
	I0729 02:14:30.356348   74868 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0729 02:14:30.361036   74868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 02:14:30.374061   74868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:30.505788   74868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 02:14:30.523190   74868 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582 for IP: 192.168.39.3
	I0729 02:14:30.523211   74868 certs.go:194] generating shared ca certs ...
	I0729 02:14:30.523233   74868 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:30.523399   74868 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 02:14:30.523462   74868 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 02:14:30.523474   74868 certs.go:256] generating profile certs ...
	I0729 02:14:30.523586   74868 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/client.key
	I0729 02:14:30.523651   74868 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/apiserver.key.bc53dfb9
	I0729 02:14:30.523699   74868 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/proxy-client.key
	I0729 02:14:30.523842   74868 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 02:14:30.523884   74868 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 02:14:30.523892   74868 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 02:14:30.523923   74868 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 02:14:30.523956   74868 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 02:14:30.523985   74868 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 02:14:30.524037   74868 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:14:30.524876   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 02:14:30.555493   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 02:14:30.603794   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 02:14:30.648845   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 02:14:30.684612   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 02:14:30.730576   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 02:14:30.767808   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 02:14:30.793632   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 02:14:30.820342   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 02:14:30.852285   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 02:14:30.879763   74868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 02:14:30.905928   74868 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 02:14:30.924170   74868 ssh_runner.go:195] Run: openssl version
	I0729 02:14:30.930711   74868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 02:14:30.942174   74868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 02:14:30.946923   74868 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 02:14:30.946976   74868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 02:14:30.953234   74868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 02:14:30.964753   74868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 02:14:30.977380   74868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:30.982186   74868 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:30.982239   74868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:30.988201   74868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 02:14:31.000521   74868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 02:14:31.013770   74868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 02:14:31.018791   74868 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 02:14:31.018846   74868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 02:14:31.025154   74868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 02:14:31.036874   74868 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 02:14:31.041415   74868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 02:14:31.047616   74868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 02:14:31.053802   74868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 02:14:31.060605   74868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 02:14:31.066998   74868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 02:14:31.074047   74868 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
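Each of those openssl calls uses -checkend 86400, which exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours) and 1 otherwise, presumably so the restart path can tell whether the existing certs are safe to reuse. A standalone version of the same check (any of the cert paths above works):

    # exit 0: valid for >= 24h more; exit 1: expires (or already expired) within 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "expires within 24h"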
	I0729 02:14:31.082094   74868 kubeadm.go:392] StartCluster: {Name:old-k8s-version-403582 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-403582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:14:31.082214   74868 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 02:14:31.082283   74868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:14:31.120938   74868 cri.go:89] found id: ""
	I0729 02:14:31.121015   74868 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 02:14:31.131526   74868 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 02:14:31.131547   74868 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 02:14:31.131597   74868 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 02:14:31.141338   74868 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 02:14:31.142858   74868 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-403582" does not appear in /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 02:14:31.144007   74868 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-9421/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-403582" cluster setting kubeconfig missing "old-k8s-version-403582" context setting]
	I0729 02:14:31.145582   74868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:31.187086   74868 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 02:14:31.198653   74868 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.3
	I0729 02:14:31.198685   74868 kubeadm.go:1160] stopping kube-system containers ...
	I0729 02:14:31.198697   74868 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 02:14:31.198764   74868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:14:31.237043   74868 cri.go:89] found id: ""
	I0729 02:14:31.237111   74868 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 02:14:31.268610   74868 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 02:14:31.280307   74868 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 02:14:31.280327   74868 kubeadm.go:157] found existing configuration files:
	
	I0729 02:14:31.280380   74868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 02:14:31.290720   74868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 02:14:31.290785   74868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 02:14:31.302024   74868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 02:14:31.315770   74868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 02:14:31.315837   74868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 02:14:31.328684   74868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 02:14:31.343515   74868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 02:14:31.343589   74868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 02:14:31.355034   74868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 02:14:31.366194   74868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 02:14:31.366252   74868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 02:14:31.376865   74868 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 02:14:31.387464   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:31.532385   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:32.415631   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:32.678380   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:32.814627   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:32.927732   74868 api_server.go:52] waiting for apiserver process to appear ...
	I0729 02:14:32.927817   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:33.428043   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:33.928683   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
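Rather than a full kubeadm init, the restart path above replays individual init phases against the generated config (using the v1.20.0 kubeadm from /var/lib/minikube/binaries); stripped of the env wrapper, the sequence is equivalent to:

    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml

after which the loop of pgrep calls simply polls until a kube-apiserver process shows up.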
	I0729 02:14:32.789997   74477 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 02:14:32.790016   74477 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 02:14:32.790037   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:32.790051   74477 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 02:14:32.790064   74477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 02:14:32.790075   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:32.793662   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:32.793934   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:32.794109   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:32.794134   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:32.794489   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:32.794549   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:32.794563   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:32.794740   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:32.794793   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:32.794957   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:32.794983   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:32.795142   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:32.795200   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:32.795344   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:32.828197   74477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0729 02:14:32.828704   74477 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:32.829192   74477 main.go:141] libmachine: Using API Version  1
	I0729 02:14:32.829206   74477 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:32.829618   74477 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:32.829903   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetState
	I0729 02:14:32.832281   74477 main.go:141] libmachine: (no-preload-944718) Calling .DriverName
	I0729 02:14:32.832550   74477 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 02:14:32.832564   74477 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 02:14:32.832582   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHHostname
	I0729 02:14:32.836163   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:32.836805   74477 main.go:141] libmachine: (no-preload-944718) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:08:b0", ip: ""} in network mk-no-preload-944718: {Iface:virbr4 ExpiryTime:2024-07-29 03:13:54 +0000 UTC Type:0 Mac:52:54:00:5a:08:b0 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-944718 Clientid:01:52:54:00:5a:08:b0}
	I0729 02:14:32.836925   74477 main.go:141] libmachine: (no-preload-944718) DBG | domain no-preload-944718 has defined IP address 192.168.72.62 and MAC address 52:54:00:5a:08:b0 in network mk-no-preload-944718
	I0729 02:14:32.837326   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHPort
	I0729 02:14:32.837676   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHKeyPath
	I0729 02:14:32.837882   74477 main.go:141] libmachine: (no-preload-944718) Calling .GetSSHUsername
	I0729 02:14:32.838048   74477 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/no-preload-944718/id_rsa Username:docker}
	I0729 02:14:32.972820   74477 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 02:14:33.005626   74477 node_ready.go:35] waiting up to 6m0s for node "no-preload-944718" to be "Ready" ...
	I0729 02:14:33.130189   74477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 02:14:33.140223   74477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 02:14:33.149029   74477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 02:14:33.149055   74477 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 02:14:33.171294   74477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 02:14:33.171317   74477 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 02:14:33.210062   74477 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 02:14:33.210089   74477 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 02:14:33.245744   74477 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 02:14:34.224298   74477 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.094064739s)
	I0729 02:14:34.224310   74477 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.084049545s)
	I0729 02:14:34.224348   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.224382   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.224383   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.224397   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.224835   74477 main.go:141] libmachine: (no-preload-944718) DBG | Closing plugin on server side
	I0729 02:14:34.224873   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.224895   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.224904   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.224911   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.224949   74477 main.go:141] libmachine: (no-preload-944718) DBG | Closing plugin on server side
	I0729 02:14:34.224847   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.224976   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.224984   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.224992   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.226471   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.226619   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.226547   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.226649   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.226550   74477 main.go:141] libmachine: (no-preload-944718) DBG | Closing plugin on server side
	I0729 02:14:34.226557   74477 main.go:141] libmachine: (no-preload-944718) DBG | Closing plugin on server side
	I0729 02:14:34.236213   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.236240   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.236472   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.236491   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.325877   74477 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080086945s)
	I0729 02:14:34.325926   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.325947   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.326358   74477 main.go:141] libmachine: (no-preload-944718) DBG | Closing plugin on server side
	I0729 02:14:34.326394   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.326407   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.326417   74477 main.go:141] libmachine: Making call to close driver server
	I0729 02:14:34.326425   74477 main.go:141] libmachine: (no-preload-944718) Calling .Close
	I0729 02:14:34.326663   74477 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:14:34.326679   74477 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:14:34.326689   74477 addons.go:475] Verifying addon metrics-server=true in "no-preload-944718"
	I0729 02:14:34.328712   74477 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
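With the storage-provisioner, default-storageclass, and metrics-server manifests applied, the log moves on to verifying the metrics-server addon. Assuming kubectl access to the no-preload-944718 profile, a manual spot-check of the same objects would be:

    kubectl --context no-preload-944718 -n kube-system get deployment metrics-server
    kubectl --context no-preload-944718 get apiservice v1beta1.metrics.k8s.io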
	I0729 02:14:32.602806   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:32.603516   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:32.603543   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:32.603449   75839 retry.go:31] will retry after 1.801654768s: waiting for machine to come up
	I0729 02:14:34.406455   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:34.406989   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:34.407023   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:34.406935   75839 retry.go:31] will retry after 2.80645337s: waiting for machine to come up
	I0729 02:14:34.330098   74477 addons.go:510] duration metric: took 1.595198971s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 02:14:35.009463   74477 node_ready.go:53] node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:34.428559   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:34.928296   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:35.428293   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:35.928123   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:36.428551   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:36.928121   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:37.428820   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:37.928043   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:38.428138   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:38.928000   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:37.215944   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:37.216325   74243 main.go:141] libmachine: (embed-certs-436055) DBG | unable to find current IP address of domain embed-certs-436055 in network mk-embed-certs-436055
	I0729 02:14:37.216347   74243 main.go:141] libmachine: (embed-certs-436055) DBG | I0729 02:14:37.216278   75839 retry.go:31] will retry after 3.275719528s: waiting for machine to come up
	I0729 02:14:40.494034   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.494495   74243 main.go:141] libmachine: (embed-certs-436055) Found IP for machine: 192.168.50.74
	I0729 02:14:40.494508   74243 main.go:141] libmachine: (embed-certs-436055) Reserving static IP address...
	I0729 02:14:40.494523   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has current primary IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.494877   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "embed-certs-436055", mac: "52:54:00:00:63:b5", ip: "192.168.50.74"} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.494921   74243 main.go:141] libmachine: (embed-certs-436055) DBG | skip adding static IP to network mk-embed-certs-436055 - found existing host DHCP lease matching {name: "embed-certs-436055", mac: "52:54:00:00:63:b5", ip: "192.168.50.74"}
	I0729 02:14:40.494936   74243 main.go:141] libmachine: (embed-certs-436055) Reserved static IP address: 192.168.50.74
	I0729 02:14:40.494946   74243 main.go:141] libmachine: (embed-certs-436055) Waiting for SSH to be available...
	I0729 02:14:40.494958   74243 main.go:141] libmachine: (embed-certs-436055) DBG | Getting to WaitForSSH function...
	I0729 02:14:40.496861   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.497204   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.497246   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.497462   74243 main.go:141] libmachine: (embed-certs-436055) DBG | Using SSH client type: external
	I0729 02:14:40.497490   74243 main.go:141] libmachine: (embed-certs-436055) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa (-rw-------)
	I0729 02:14:40.497521   74243 main.go:141] libmachine: (embed-certs-436055) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 02:14:40.497535   74243 main.go:141] libmachine: (embed-certs-436055) DBG | About to run SSH command:
	I0729 02:14:40.497548   74243 main.go:141] libmachine: (embed-certs-436055) DBG | exit 0
	I0729 02:14:40.619181   74243 main.go:141] libmachine: (embed-certs-436055) DBG | SSH cmd err, output: <nil>: 
	I0729 02:14:40.619531   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetConfigRaw
	I0729 02:14:40.620195   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetIP
	I0729 02:14:40.622978   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.623425   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.623446   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.623692   74243 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/config.json ...
	I0729 02:14:40.623902   74243 machine.go:94] provisionDockerMachine start ...
	I0729 02:14:40.623923   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:40.624128   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:40.626230   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.626532   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.626576   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.626701   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:40.626884   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:40.627033   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:40.627268   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:40.627426   74243 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:40.627610   74243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0729 02:14:40.627623   74243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 02:14:40.727374   74243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 02:14:40.727408   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetMachineName
	I0729 02:14:40.727659   74243 buildroot.go:166] provisioning hostname "embed-certs-436055"
	I0729 02:14:40.727679   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetMachineName
	I0729 02:14:40.727871   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:40.730219   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.730539   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.730570   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.730697   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:40.730887   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:40.731042   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:40.731180   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:40.731312   74243 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:40.731475   74243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0729 02:14:40.731492   74243 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-436055 && echo "embed-certs-436055" | sudo tee /etc/hostname
	I0729 02:14:37.510149   74477 node_ready.go:53] node "no-preload-944718" has status "Ready":"False"
	I0729 02:14:38.008729   74477 node_ready.go:49] node "no-preload-944718" has status "Ready":"True"
	I0729 02:14:38.008751   74477 node_ready.go:38] duration metric: took 5.003089619s for node "no-preload-944718" to be "Ready" ...
	I0729 02:14:38.008759   74477 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 02:14:38.016102   74477 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:38.020918   74477 pod_ready.go:92] pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace has status "Ready":"True"
	I0729 02:14:38.020937   74477 pod_ready.go:81] duration metric: took 4.811561ms for pod "coredns-5cfdc65f69-tbfrw" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:38.020945   74477 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:38.025129   74477 pod_ready.go:92] pod "etcd-no-preload-944718" in "kube-system" namespace has status "Ready":"True"
	I0729 02:14:38.025151   74477 pod_ready.go:81] duration metric: took 4.198749ms for pod "etcd-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:38.025162   74477 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:40.030893   74477 pod_ready.go:102] pod "kube-apiserver-no-preload-944718" in "kube-system" namespace has status "Ready":"False"
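The node_ready/pod_ready waits above poll the API until the node object and the system-critical pods report Ready. Assuming the profile's kubeconfig context, the same state can be inspected by hand:

    kubectl --context no-preload-944718 get node no-preload-944718
    kubectl --context no-preload-944718 -n kube-system get pods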
	I0729 02:14:40.850175   74243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-436055
	
	I0729 02:14:40.850209   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:40.853315   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.853733   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.853760   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.853971   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:40.854149   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:40.854277   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:40.854395   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:40.854551   74243 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:40.854761   74243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0729 02:14:40.854788   74243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-436055' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-436055/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-436055' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 02:14:40.964410   74243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 02:14:40.964440   74243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 02:14:40.964477   74243 buildroot.go:174] setting up certificates
	I0729 02:14:40.964489   74243 provision.go:84] configureAuth start
	I0729 02:14:40.964501   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetMachineName
	I0729 02:14:40.964826   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetIP
	I0729 02:14:40.967258   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.967704   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.967730   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.967813   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:40.970097   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.970554   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:40.970581   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:40.970722   74243 provision.go:143] copyHostCerts
	I0729 02:14:40.970789   74243 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 02:14:40.970834   74243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 02:14:40.970910   74243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 02:14:40.971022   74243 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 02:14:40.971034   74243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 02:14:40.971086   74243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 02:14:40.971164   74243 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 02:14:40.971173   74243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 02:14:40.971199   74243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 02:14:40.971259   74243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.embed-certs-436055 san=[127.0.0.1 192.168.50.74 embed-certs-436055 localhost minikube]
	I0729 02:14:41.074822   74243 provision.go:177] copyRemoteCerts
	I0729 02:14:41.074875   74243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 02:14:41.074898   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:41.077494   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.077792   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.077817   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.078028   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:41.078165   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.078265   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:41.078384   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:41.163014   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 02:14:41.191363   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 02:14:41.217469   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 02:14:41.240981   74243 provision.go:87] duration metric: took 276.477873ms to configureAuth
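configureAuth regenerates the docker-machine style server certificate with the SANs listed earlier (127.0.0.1, the node IP, the hostname, localhost, minikube) and copies it to /etc/docker on the guest. A quick way to confirm which SANs ended up in the deployed cert (a sketch, run on the guest):

    openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'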
	I0729 02:14:41.241008   74243 buildroot.go:189] setting minikube options for container-runtime
	I0729 02:14:41.241216   74243 config.go:182] Loaded profile config "embed-certs-436055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 02:14:41.241283   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:41.244118   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.244396   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.244419   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.244619   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:41.244803   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.244993   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.245166   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:41.245332   74243 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:41.245513   74243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0729 02:14:41.245535   74243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 02:14:41.511041   74243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
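That SSH command writes the --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O so it takes effect. On the guest, the result can be verified with:

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio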
	I0729 02:14:41.511090   74243 machine.go:97] duration metric: took 887.173399ms to provisionDockerMachine
	I0729 02:14:41.511105   74243 start.go:293] postStartSetup for "embed-certs-436055" (driver="kvm2")
	I0729 02:14:41.511117   74243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 02:14:41.511137   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:41.511450   74243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 02:14:41.511471   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:41.514380   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.514794   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.514850   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.514962   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:41.515175   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.515345   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:41.515475   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:41.598588   74243 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 02:14:41.602748   74243 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 02:14:41.602766   74243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 02:14:41.602841   74243 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 02:14:41.602931   74243 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 02:14:41.603037   74243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 02:14:41.612879   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:14:41.636232   74243 start.go:296] duration metric: took 125.112996ms for postStartSetup
	I0729 02:14:41.636270   74243 fix.go:56] duration metric: took 19.236134116s for fixHost
	I0729 02:14:41.636292   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:41.638938   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.639313   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.639342   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.639503   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:41.639663   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.639809   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.639895   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:41.640047   74243 main.go:141] libmachine: Using SSH client type: native
	I0729 02:14:41.640258   74243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0729 02:14:41.640274   74243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 02:14:41.743907   74243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722219281.716437869
	
	I0729 02:14:41.743932   74243 fix.go:216] guest clock: 1722219281.716437869
	I0729 02:14:41.743939   74243 fix.go:229] Guest: 2024-07-29 02:14:41.716437869 +0000 UTC Remote: 2024-07-29 02:14:41.636274621 +0000 UTC m=+335.902040325 (delta=80.163248ms)
	I0729 02:14:41.743956   74243 fix.go:200] guest clock delta is within tolerance: 80.163248ms
	I0729 02:14:41.743961   74243 start.go:83] releasing machines lock for "embed-certs-436055", held for 19.343869271s
	I0729 02:14:41.743997   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:41.744252   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetIP
	I0729 02:14:41.747416   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.747907   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.747935   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.748107   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:41.748589   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:41.748770   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:41.748851   74243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 02:14:41.748915   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:41.749005   74243 ssh_runner.go:195] Run: cat /version.json
	I0729 02:14:41.749028   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:41.751451   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.751664   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.751842   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.751882   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.751990   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:41.752094   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:41.752121   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:41.752145   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.752280   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:41.752338   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:41.752409   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:41.752495   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:41.752639   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:41.752775   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:41.848018   74243 ssh_runner.go:195] Run: systemctl --version
	I0729 02:14:41.854075   74243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 02:14:42.001194   74243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 02:14:42.007916   74243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 02:14:42.007971   74243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 02:14:42.025732   74243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 02:14:42.025754   74243 start.go:495] detecting cgroup driver to use...
	I0729 02:14:42.025815   74243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 02:14:42.049041   74243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 02:14:42.066737   74243 docker.go:217] disabling cri-docker service (if available) ...
	I0729 02:14:42.066813   74243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 02:14:42.083728   74243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 02:14:42.098406   74243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 02:14:42.222820   74243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 02:14:42.375377   74243 docker.go:233] disabling docker service ...
	I0729 02:14:42.375455   74243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 02:14:42.390076   74243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 02:14:42.403042   74243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 02:14:42.541341   74243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 02:14:42.676013   74243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 02:14:42.690249   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 02:14:42.708776   74243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 02:14:42.708835   74243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.719869   74243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 02:14:42.719940   74243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.730865   74243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.742576   74243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.753619   74243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 02:14:42.764115   74243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.774744   74243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.792158   74243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 02:14:42.803664   74243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 02:14:42.815044   74243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 02:14:42.815130   74243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 02:14:42.828859   74243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 02:14:42.840098   74243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:42.982433   74243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 02:14:43.128888   74243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 02:14:43.128967   74243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 02:14:43.134308   74243 start.go:563] Will wait 60s for crictl version
	I0729 02:14:43.134361   74243 ssh_runner.go:195] Run: which crictl
	I0729 02:14:43.138666   74243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 02:14:43.189517   74243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 02:14:43.189624   74243 ssh_runner.go:195] Run: crio --version
	I0729 02:14:43.220217   74243 ssh_runner.go:195] Run: crio --version
	I0729 02:14:43.253878   74243 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
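(Editorial note on the step above: the "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" lines are a simple poll-until-ready loop driven over SSH. A minimal illustrative sketch of that pattern follows; the helper name, interval, and local-stat probe are assumptions for illustration, not minikube's actual implementation.)

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitFor polls probe until it succeeds or the timeout elapses, in the spirit
// of the "Will wait 60s for socket path ..." step logged above.
func waitFor(desc string, timeout, interval time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", desc, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Hypothetical probe: in the log this runs over SSH on the guest VM
	// (e.g. `stat /var/run/crio/crio.sock`); here we stat a local path.
	probe := func() error {
		if _, err := os.Stat("/var/run/crio/crio.sock"); err != nil {
			return errors.New("socket not present yet")
		}
		return nil
	}
	if err := waitFor("CRI-O socket", 60*time.Second, 500*time.Millisecond, probe); err != nil {
		fmt.Println(err)
	}
}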
	I0729 02:14:39.428031   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:39.928296   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:40.428844   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:40.928441   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:41.428591   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:41.928518   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:42.428193   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:42.928317   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:43.428263   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:43.928451   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:43.255117   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetIP
	I0729 02:14:43.257718   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:43.258060   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:43.258085   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:43.258342   74243 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 02:14:43.262600   74243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 02:14:43.277029   74243 kubeadm.go:883] updating cluster {Name:embed-certs-436055 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-436055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 02:14:43.277142   74243 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 02:14:43.277183   74243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:14:43.320879   74243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 02:14:43.320958   74243 ssh_runner.go:195] Run: which lz4
	I0729 02:14:43.325028   74243 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 02:14:43.329248   74243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 02:14:43.329269   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 02:14:44.747651   74243 crio.go:462] duration metric: took 1.422651742s to copy over tarball
	I0729 02:14:44.747717   74243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 02:14:42.042891   74477 pod_ready.go:102] pod "kube-apiserver-no-preload-944718" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:42.531765   74477 pod_ready.go:92] pod "kube-apiserver-no-preload-944718" in "kube-system" namespace has status "Ready":"True"
	I0729 02:14:42.531792   74477 pod_ready.go:81] duration metric: took 4.506620929s for pod "kube-apiserver-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:42.531804   74477 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:42.537047   74477 pod_ready.go:92] pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace has status "Ready":"True"
	I0729 02:14:42.537075   74477 pod_ready.go:81] duration metric: took 5.261096ms for pod "kube-controller-manager-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:42.537087   74477 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5blp" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:43.049449   74477 pod_ready.go:92] pod "kube-proxy-f5blp" in "kube-system" namespace has status "Ready":"True"
	I0729 02:14:43.049479   74477 pod_ready.go:81] duration metric: took 512.382534ms for pod "kube-proxy-f5blp" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:43.049493   74477 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:43.054525   74477 pod_ready.go:92] pod "kube-scheduler-no-preload-944718" in "kube-system" namespace has status "Ready":"True"
	I0729 02:14:43.054555   74477 pod_ready.go:81] duration metric: took 5.053558ms for pod "kube-scheduler-no-preload-944718" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:43.054567   74477 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:45.063160   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:44.428722   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:44.928309   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:45.428070   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:45.928358   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:46.428158   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:46.928344   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:47.428269   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:47.928239   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:48.428115   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:48.928629   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:46.985625   74243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.237875607s)
	I0729 02:14:46.985653   74243 crio.go:469] duration metric: took 2.237980338s to extract the tarball
	I0729 02:14:46.985663   74243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 02:14:47.026105   74243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 02:14:47.072082   74243 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 02:14:47.072106   74243 cache_images.go:84] Images are preloaded, skipping loading
	I0729 02:14:47.072116   74243 kubeadm.go:934] updating node { 192.168.50.74 8443 v1.30.3 crio true true} ...
	I0729 02:14:47.072247   74243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-436055 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-436055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 02:14:47.072309   74243 ssh_runner.go:195] Run: crio config
	I0729 02:14:47.121184   74243 cni.go:84] Creating CNI manager for ""
	I0729 02:14:47.121213   74243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:14:47.121232   74243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 02:14:47.121259   74243 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.74 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-436055 NodeName:embed-certs-436055 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 02:14:47.121433   74243 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-436055"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 02:14:47.121512   74243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 02:14:47.132629   74243 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 02:14:47.132687   74243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 02:14:47.143855   74243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 02:14:47.162203   74243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 02:14:47.179784   74243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0729 02:14:47.197250   74243 ssh_runner.go:195] Run: grep 192.168.50.74	control-plane.minikube.internal$ /etc/hosts
	I0729 02:14:47.201327   74243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 02:14:47.215697   74243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:47.337357   74243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 02:14:47.354374   74243 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055 for IP: 192.168.50.74
	I0729 02:14:47.354404   74243 certs.go:194] generating shared ca certs ...
	I0729 02:14:47.354422   74243 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:47.354613   74243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 02:14:47.354689   74243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 02:14:47.354703   74243 certs.go:256] generating profile certs ...
	I0729 02:14:47.354814   74243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/client.key
	I0729 02:14:47.354900   74243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/apiserver.key.4cf0c977
	I0729 02:14:47.354952   74243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/proxy-client.key
	I0729 02:14:47.355126   74243 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 02:14:47.355161   74243 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 02:14:47.355168   74243 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 02:14:47.355193   74243 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 02:14:47.355225   74243 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 02:14:47.355252   74243 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 02:14:47.355312   74243 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 02:14:47.356069   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 02:14:47.388166   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 02:14:47.422390   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 02:14:47.460642   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 02:14:47.497104   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 02:14:47.526462   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 02:14:47.570421   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 02:14:47.596535   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/embed-certs-436055/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 02:14:47.622530   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 02:14:47.646860   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 02:14:47.671617   74243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 02:14:47.695713   74243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 02:14:47.712424   74243 ssh_runner.go:195] Run: openssl version
	I0729 02:14:47.718293   74243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 02:14:47.729942   74243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 02:14:47.734979   74243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 02:14:47.735038   74243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 02:14:47.740986   74243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 02:14:47.751913   74243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 02:14:47.763019   74243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:47.767753   74243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:47.767815   74243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 02:14:47.773604   74243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 02:14:47.785548   74243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 02:14:47.796961   74243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 02:14:47.802238   74243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 02:14:47.802301   74243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 02:14:47.808372   74243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 02:14:47.820598   74243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 02:14:47.825250   74243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 02:14:47.831217   74243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 02:14:47.837897   74243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 02:14:47.844522   74243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 02:14:47.851076   74243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 02:14:47.857550   74243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
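(Editorial note on the run of `openssl x509 -noout -in ... -checkend 86400` commands above: each one verifies that a control-plane certificate remains valid for at least the next 24 hours. An equivalent check expressed in Go is sketched below purely for illustration; minikube itself shells out to openssl as shown, and the local path in main is hypothetical.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the same condition `openssl x509 -checkend 86400` tests for a 24h window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now+d falls past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local path; in the log this check runs on the guest VM.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}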
	I0729 02:14:47.863490   74243 kubeadm.go:392] StartCluster: {Name:embed-certs-436055 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-436055 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 02:14:47.863592   74243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 02:14:47.863657   74243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:14:47.905920   74243 cri.go:89] found id: ""
	I0729 02:14:47.905995   74243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 02:14:47.916604   74243 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 02:14:47.916627   74243 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 02:14:47.916679   74243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 02:14:47.926730   74243 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 02:14:47.927767   74243 kubeconfig.go:125] found "embed-certs-436055" server: "https://192.168.50.74:8443"
	I0729 02:14:47.929999   74243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 02:14:47.939962   74243 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.74
	I0729 02:14:47.939987   74243 kubeadm.go:1160] stopping kube-system containers ...
	I0729 02:14:47.939997   74243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 02:14:47.940043   74243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 02:14:47.978915   74243 cri.go:89] found id: ""
	I0729 02:14:47.978980   74243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 02:14:47.996973   74243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 02:14:48.008314   74243 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 02:14:48.008334   74243 kubeadm.go:157] found existing configuration files:
	
	I0729 02:14:48.008391   74243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 02:14:48.018595   74243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 02:14:48.018658   74243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 02:14:48.028394   74243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 02:14:48.037593   74243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 02:14:48.037657   74243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 02:14:48.047162   74243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 02:14:48.057049   74243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 02:14:48.057112   74243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 02:14:48.066755   74243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 02:14:48.075464   74243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 02:14:48.075529   74243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 02:14:48.084824   74243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 02:14:48.101089   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:48.222471   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:49.171791   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:49.378386   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:49.442577   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:49.518827   74243 api_server.go:52] waiting for apiserver process to appear ...
	I0729 02:14:49.518932   74243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:50.019868   74243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:50.519125   74243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:50.551974   74243 api_server.go:72] duration metric: took 1.033146527s to wait for apiserver process to appear ...
	I0729 02:14:50.552005   74243 api_server.go:88] waiting for apiserver healthz status ...
	I0729 02:14:50.552026   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:50.552587   74243 api_server.go:269] stopped: https://192.168.50.74:8443/healthz: Get "https://192.168.50.74:8443/healthz": dial tcp 192.168.50.74:8443: connect: connection refused
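(Editorial note on the healthz probes that follow: the progression is connection refused while the apiserver starts, then 403 for the anonymous user, then 500 while post-start hooks finish, until the endpoint finally returns 200. Each probe is an ordinary HTTPS GET against https://192.168.50.74:8443/healthz retried on a short interval. A minimal sketch of such a probe follows; skipping TLS verification, the 500ms cadence, and the function names are assumptions for illustration, not minikube's code.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz fetches the apiserver /healthz endpoint once and returns the
// HTTP status plus body, which is what the log prints on 403/500 responses.
func probeHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's certificate is not trusted by this throwaway
		// client, so verification is skipped; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	url := "https://192.168.50.74:8443/healthz"
	// A real caller would bound this loop with a deadline, as the log's
	// "waiting for apiserver healthz status" step does.
	for {
		code, body, err := probeHealthz(url)
		if err == nil && code == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Printf("not ready yet (code=%d err=%v)\n%s\n", code, err, body)
		time.Sleep(500 * time.Millisecond)
	}
}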
	I0729 02:14:47.564818   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:49.809188   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:49.427900   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:49.928470   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:50.427890   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:50.927957   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:51.428438   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:51.928871   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:52.428317   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:52.928425   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:53.428097   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:53.928625   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:51.052520   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:53.937449   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 02:14:53.937481   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 02:14:53.937494   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:54.004460   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 02:14:54.004491   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 02:14:54.052659   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:54.058138   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:54.058180   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:54.552749   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:54.562493   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:54.562523   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:55.052104   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:55.063612   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:55.063661   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:55.552168   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:55.556620   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:55.556655   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:52.061115   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:54.061980   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:56.052805   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:56.057273   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:56.057317   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:56.553035   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:56.557723   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:56.557758   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:57.052260   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:57.056662   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 02:14:57.056696   74243 api_server.go:103] status: https://192.168.50.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 02:14:57.552446   74243 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0729 02:14:57.556836   74243 api_server.go:279] https://192.168.50.74:8443/healthz returned 200:
	ok
	I0729 02:14:57.563404   74243 api_server.go:141] control plane version: v1.30.3
	I0729 02:14:57.563428   74243 api_server.go:131] duration metric: took 7.011416325s to wait for apiserver health ...
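	The api_server.go lines above show the usual wait-for-apiserver loop: hit https://192.168.50.74:8443/healthz roughly every 500ms, print the body whenever it returns 500, and stop as soon as it returns 200 (here after about 7s). Below is a self-contained sketch of that polling pattern using only the Go standard library; the URL, interval and overall idea are taken from the log, while the function name, timeout handling and TLS handling are illustrative rather than minikube's actual implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes,
// mirroring the api_server.go loop in the log: one request roughly every
// 500ms, printing the body of any non-200 response.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The sketch skips certificate verification; minikube itself talks to
		// the apiserver with the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" in the log
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.74:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

	The 500s in the log come from post-start hooks ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/apiservice-discovery-controller) that have not finished yet, which is why the loop simply retries until the check returns "ok".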
	I0729 02:14:57.563437   74243 cni.go:84] Creating CNI manager for ""
	I0729 02:14:57.563444   74243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 02:14:57.565461   74243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 02:14:54.428830   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:54.928843   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:55.428424   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:55.928612   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:56.428173   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:56.928817   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:57.428283   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:57.928706   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:58.428559   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:58.928027   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:57.566859   74243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 02:14:57.578370   74243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
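	At this point the 74243 run has created /etc/cni/net.d and copied a 496-byte bridge conflist into it (the "Configuring bridge CNI" step a few lines earlier). The log does not show the file's contents; the sketch below writes a representative bridge + portmap conflist of the same shape, and every field value in it is an assumption for illustration, not the actual 1-k8s.conflist minikube generates.

```go
package main

import (
	"log"
	"os"
)

// A representative bridge CNI configuration; minikube's generated
// 1-k8s.conflist will differ in details such as the subnet and CNI version.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors "sudo mkdir -p /etc/cni/net.d" followed by the scp of the conflist.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```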
	I0729 02:14:57.598103   74243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 02:14:57.607547   74243 system_pods.go:59] 8 kube-system pods found
	I0729 02:14:57.607576   74243 system_pods.go:61] "coredns-7db6d8ff4d-qc8gd" [b5b0c1d5-73f9-4c82-be37-963b4b2ade46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 02:14:57.607583   74243 system_pods.go:61] "etcd-embed-certs-436055" [01fcf543-a5df-4dfe-9bc7-d299bb3c29da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 02:14:57.607589   74243 system_pods.go:61] "kube-apiserver-embed-certs-436055" [4da4d79e-6b19-4e4e-9753-28d789a483a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 02:14:57.607600   74243 system_pods.go:61] "kube-controller-manager-embed-certs-436055" [3ada0e2c-4bfb-45ab-8f2f-c1765b8099e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 02:14:57.607604   74243 system_pods.go:61] "kube-proxy-24b8w" [952f3a97-56ca-42ad-9d32-4c06582c23bb] Running
	I0729 02:14:57.607608   74243 system_pods.go:61] "kube-scheduler-embed-certs-436055" [dc997ac2-6d96-4b70-99e2-2aef79c13e09] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 02:14:57.607612   74243 system_pods.go:61] "metrics-server-569cc877fc-m9nnh" [48a30c74-efde-4e3e-ba8a-697a7c40cc64] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 02:14:57.607621   74243 system_pods.go:61] "storage-provisioner" [7335e915-eba3-440f-a462-8e3f444eacb4] Running
	I0729 02:14:57.607627   74243 system_pods.go:74] duration metric: took 9.501123ms to wait for pod list to return data ...
	I0729 02:14:57.607634   74243 node_conditions.go:102] verifying NodePressure condition ...
	I0729 02:14:57.610845   74243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 02:14:57.610865   74243 node_conditions.go:123] node cpu capacity is 2
	I0729 02:14:57.610875   74243 node_conditions.go:105] duration metric: took 3.236538ms to run NodePressure ...
	I0729 02:14:57.610888   74243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 02:14:57.880867   74243 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 02:14:57.885048   74243 kubeadm.go:739] kubelet initialised
	I0729 02:14:57.885068   74243 kubeadm.go:740] duration metric: took 4.17535ms waiting for restarted kubelet to initialise ...
	I0729 02:14:57.885075   74243 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 02:14:57.889887   74243 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:57.894822   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:57.894843   74243 pod_ready.go:81] duration metric: took 4.933932ms for pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:57.894851   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:57.894857   74243 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:57.898632   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "etcd-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:57.898649   74243 pod_ready.go:81] duration metric: took 3.78553ms for pod "etcd-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:57.898656   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "etcd-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:57.898661   74243 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:57.902536   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:57.902555   74243 pod_ready.go:81] duration metric: took 3.887541ms for pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:57.902562   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:57.902567   74243 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:58.003946   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:58.003976   74243 pod_ready.go:81] duration metric: took 101.4016ms for pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:58.003989   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:58.003997   74243 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-24b8w" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:58.400986   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "kube-proxy-24b8w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:58.401029   74243 pod_ready.go:81] duration metric: took 397.02029ms for pod "kube-proxy-24b8w" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:58.401043   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "kube-proxy-24b8w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:58.401051   74243 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:58.801554   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:58.801584   74243 pod_ready.go:81] duration metric: took 400.525082ms for pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:58.801597   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:58.801605   74243 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace to be "Ready" ...
	I0729 02:14:59.201602   74243 pod_ready.go:97] node "embed-certs-436055" hosting pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:59.201635   74243 pod_ready.go:81] duration metric: took 400.020721ms for pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace to be "Ready" ...
	E0729 02:14:59.201648   74243 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-436055" hosting pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-436055" has status "Ready":"False"
	I0729 02:14:59.201657   74243 pod_ready.go:38] duration metric: took 1.316574434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
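	The pod_ready.go lines above poll each system-critical pod and count it as ready only when its PodReady condition is True, skipping pods whose node is not yet "Ready". Below is a minimal client-go sketch of just the Ready-condition check at the core of that loop; the kubeconfig path is a placeholder, the pod name is taken from the log, and the skip-when-node-not-Ready logic is deliberately left out.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has its Ready condition set to
// True, which is the condition pod_ready.go waits for on each pod.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-qc8gd")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Ready:", ready)
}
```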
	I0729 02:14:59.201678   74243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 02:14:59.219748   74243 ops.go:34] apiserver oom_adj: -16
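	The ops.go line above confirms the restarted apiserver kept its protective OOM score by reading /proc/$(pgrep kube-apiserver)/oom_adj, which reports -16 here. A small local sketch of the same check follows; it assumes a single kube-apiserver process and simply reads the first PID pgrep returns.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits non-zero when nothing matches, which Output reports as an error.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		log.Fatal("no kube-apiserver process found")
	}
	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s", data) // -16 in the log above
}
```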
	I0729 02:14:59.219769   74243 kubeadm.go:597] duration metric: took 11.303136403s to restartPrimaryControlPlane
	I0729 02:14:59.219779   74243 kubeadm.go:394] duration metric: took 11.356293151s to StartCluster
	I0729 02:14:59.219798   74243 settings.go:142] acquiring lock: {Name:mkb5968d4cb7e70e3ab5eb9e0fafacd5f2b8ffad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:59.219876   74243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 02:14:59.221928   74243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/kubeconfig: {Name:mkfc86149281a82bb07035a854bdc5c590b97078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 02:14:59.222171   74243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 02:14:59.222243   74243 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 02:14:59.222324   74243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-436055"
	I0729 02:14:59.222349   74243 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-436055"
	W0729 02:14:59.222358   74243 addons.go:243] addon storage-provisioner should already be in state true
	I0729 02:14:59.222389   74243 host.go:66] Checking if "embed-certs-436055" exists ...
	I0729 02:14:59.222399   74243 addons.go:69] Setting metrics-server=true in profile "embed-certs-436055"
	I0729 02:14:59.222446   74243 addons.go:234] Setting addon metrics-server=true in "embed-certs-436055"
	I0729 02:14:59.222448   74243 config.go:182] Loaded profile config "embed-certs-436055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0729 02:14:59.222460   74243 addons.go:243] addon metrics-server should already be in state true
	I0729 02:14:59.222385   74243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-436055"
	I0729 02:14:59.222502   74243 host.go:66] Checking if "embed-certs-436055" exists ...
	I0729 02:14:59.222556   74243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-436055"
	I0729 02:14:59.222740   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.222785   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.222911   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.222938   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.223016   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.223096   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.223914   74243 out.go:177] * Verifying Kubernetes components...
	I0729 02:14:59.225362   74243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 02:14:59.240566   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I0729 02:14:59.241224   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.241535   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
	I0729 02:14:59.241873   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.241899   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.242042   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.242133   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38211
	I0729 02:14:59.242307   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.242504   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetState
	I0729 02:14:59.242584   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.242892   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.242916   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.243214   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.243231   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.243293   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.243510   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.243894   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.243936   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.244029   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.244049   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.247045   74243 addons.go:234] Setting addon default-storageclass=true in "embed-certs-436055"
	W0729 02:14:59.247090   74243 addons.go:243] addon default-storageclass should already be in state true
	I0729 02:14:59.247118   74243 host.go:66] Checking if "embed-certs-436055" exists ...
	I0729 02:14:59.247493   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.247520   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.260471   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0729 02:14:59.261013   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.261519   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.261543   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.261893   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.262094   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetState
	I0729 02:14:59.264058   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I0729 02:14:59.264238   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:59.264476   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.265365   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.265392   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.265730   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 02:14:59.265740   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.266032   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetState
	I0729 02:14:59.266128   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.266486   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.266506   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.266521   74243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 02:14:59.268047   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.268087   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:59.269575   74243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 02:14:59.269609   74243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 02:14:59.269812   74243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 02:14:59.271187   74243 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 02:14:59.271208   74243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 02:14:59.271227   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:59.272064   74243 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 02:14:59.272077   74243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 02:14:59.272104   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:59.275206   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:59.275758   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:59.275777   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:59.275937   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:59.276163   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:59.276306   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:59.276410   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:59.276522   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:59.276918   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:59.276942   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:59.277189   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:59.278648   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:59.278820   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:59.278961   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:59.290206   74243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
	I0729 02:14:59.290955   74243 main.go:141] libmachine: () Calling .GetVersion
	I0729 02:14:59.291545   74243 main.go:141] libmachine: Using API Version  1
	I0729 02:14:59.291562   74243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 02:14:59.291979   74243 main.go:141] libmachine: () Calling .GetMachineName
	I0729 02:14:59.292172   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetState
	I0729 02:14:59.293939   74243 main.go:141] libmachine: (embed-certs-436055) Calling .DriverName
	I0729 02:14:59.294164   74243 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 02:14:59.294178   74243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 02:14:59.294195   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHHostname
	I0729 02:14:59.297320   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:59.297663   74243 main.go:141] libmachine: (embed-certs-436055) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:63:b5", ip: ""} in network mk-embed-certs-436055: {Iface:virbr2 ExpiryTime:2024-07-29 03:14:34 +0000 UTC Type:0 Mac:52:54:00:00:63:b5 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:embed-certs-436055 Clientid:01:52:54:00:00:63:b5}
	I0729 02:14:59.297687   74243 main.go:141] libmachine: (embed-certs-436055) DBG | domain embed-certs-436055 has defined IP address 192.168.50.74 and MAC address 52:54:00:00:63:b5 in network mk-embed-certs-436055
	I0729 02:14:59.297830   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHPort
	I0729 02:14:59.298097   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHKeyPath
	I0729 02:14:59.298285   74243 main.go:141] libmachine: (embed-certs-436055) Calling .GetSSHUsername
	I0729 02:14:59.298494   74243 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/embed-certs-436055/id_rsa Username:docker}
	I0729 02:14:59.426916   74243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 02:14:59.448423   74243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-436055" to be "Ready" ...
	I0729 02:14:59.504647   74243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 02:14:59.630508   74243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 02:14:59.630528   74243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 02:14:59.642754   74243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 02:14:59.666146   74243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 02:14:59.666172   74243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 02:14:59.710181   74243 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 02:14:59.710210   74243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 02:14:59.748347   74243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 02:15:00.570774   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.570793   74243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.066113024s)
	I0729 02:15:00.570801   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.570854   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.570926   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.571309   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.571325   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.571336   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.571346   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.571364   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.571369   74243 main.go:141] libmachine: (embed-certs-436055) DBG | Closing plugin on server side
	I0729 02:15:00.571377   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.571348   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.571426   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.571632   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.571646   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.571719   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.571734   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.579566   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.579589   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.579846   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.579872   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.739321   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.739352   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.739626   74243 main.go:141] libmachine: (embed-certs-436055) DBG | Closing plugin on server side
	I0729 02:15:00.739710   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.739730   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.739743   74243 main.go:141] libmachine: Making call to close driver server
	I0729 02:15:00.739756   74243 main.go:141] libmachine: (embed-certs-436055) Calling .Close
	I0729 02:15:00.739998   74243 main.go:141] libmachine: Successfully made call to close driver server
	I0729 02:15:00.740013   74243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 02:15:00.740027   74243 addons.go:475] Verifying addon metrics-server=true in "embed-certs-436055"
	I0729 02:15:00.742166   74243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 02:15:00.743503   74243 addons.go:510] duration metric: took 1.521257495s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
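	The addons step above scp'd each manifest into /etc/kubernetes/addons and applied it with "sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f ...". The sketch below approximates that apply step with os/exec; it uses whatever kubectl is on PATH instead of the bundled /var/lib/minikube/binaries/v1.30.3/kubectl and drops the sudo, so it is an approximation of the command shown in the log, not minikube's ssh_runner.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// applyAddon applies one or more addon manifests with kubectl, pointing it at
// the kubeconfig path shown in the log.
func applyAddon(manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Manifest paths are the ones the log shows being scp'd and applied.
	if err := applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	); err != nil {
		log.Fatal(err)
	}
}
```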
	I0729 02:14:56.063157   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:58.561381   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:00.562354   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:14:59.428708   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:14:59.928923   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:00.428235   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:00.928459   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:01.428438   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:01.928039   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:02.428623   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:02.928334   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:03.428277   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:03.928737   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
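	Interleaved with the other runs, the 74868 process is stuck in a wait loop: every ~500ms it re-runs "sudo pgrep -xnf kube-apiserver.*minikube.*" inside its VM, waiting for an apiserver process to appear. A stand-alone sketch of that wait loop follows, run locally with os/exec rather than over minikube's SSH runner; the pattern and interval are taken from the log, while the helper name and timeout are invented.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs pgrep until a process whose full command line
// matches pattern appears, or gives up after timeout. In the log this
// command is executed over SSH inside the minikube VM; here it runs locally.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f matches the full command line, -x requires a whole-line match,
		// -n returns only the newest matching PID.
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q appeared within %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(pid)
}
```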
	I0729 02:15:01.453367   74243 node_ready.go:53] node "embed-certs-436055" has status "Ready":"False"
	I0729 02:15:03.952570   74243 node_ready.go:53] node "embed-certs-436055" has status "Ready":"False"
	I0729 02:15:04.952831   74243 node_ready.go:49] node "embed-certs-436055" has status "Ready":"True"
	I0729 02:15:04.952859   74243 node_ready.go:38] duration metric: took 5.504403655s for node "embed-certs-436055" to be "Ready" ...
	I0729 02:15:04.952870   74243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 02:15:04.958840   74243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:04.963928   74243 pod_ready.go:92] pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace has status "Ready":"True"
	I0729 02:15:04.963945   74243 pod_ready.go:81] duration metric: took 5.080358ms for pod "coredns-7db6d8ff4d-qc8gd" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:04.963954   74243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:03.060828   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:05.061735   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:04.428281   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:04.928317   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:05.428234   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:05.928319   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:06.427987   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:06.928186   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:07.428843   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:07.928814   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:08.428362   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:08.928036   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:06.483570   74243 pod_ready.go:92] pod "etcd-embed-certs-436055" in "kube-system" namespace has status "Ready":"True"
	I0729 02:15:06.483594   74243 pod_ready.go:81] duration metric: took 1.519633111s for pod "etcd-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:06.483603   74243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.489920   74243 pod_ready.go:92] pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace has status "Ready":"True"
	I0729 02:15:07.489945   74243 pod_ready.go:81] duration metric: took 1.006335014s for pod "kube-apiserver-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.489959   74243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.497928   74243 pod_ready.go:92] pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace has status "Ready":"True"
	I0729 02:15:07.497948   74243 pod_ready.go:81] duration metric: took 7.981829ms for pod "kube-controller-manager-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.497956   74243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-24b8w" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.503264   74243 pod_ready.go:92] pod "kube-proxy-24b8w" in "kube-system" namespace has status "Ready":"True"
	I0729 02:15:07.503282   74243 pod_ready.go:81] duration metric: took 5.321132ms for pod "kube-proxy-24b8w" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.503292   74243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:09.009583   74243 pod_ready.go:92] pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace has status "Ready":"True"
	I0729 02:15:09.009604   74243 pod_ready.go:81] duration metric: took 1.506305368s for pod "kube-scheduler-embed-certs-436055" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:09.009613   74243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace to be "Ready" ...
	I0729 02:15:07.062540   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:09.560904   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:09.428884   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:09.928122   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:10.428656   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:10.928288   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:11.427985   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:11.927998   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:12.427914   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:12.928911   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:13.428320   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:13.928714   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:11.015396   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:13.016696   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:15.017247   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:11.563854   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:14.061607   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:14.428279   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:14.928906   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:15.428218   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:15.928019   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:16.428296   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:16.928521   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:17.428074   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:17.928163   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:18.428196   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:18.927947   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:17.515596   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:19.516004   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:16.061845   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:18.561060   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:19.428842   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:19.928028   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:20.427957   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:20.928270   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:21.428021   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:21.927965   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:22.428835   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:22.928241   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:23.428209   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:23.928393   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:21.517637   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:24.016375   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:21.062158   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:23.065466   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:25.561695   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:24.428187   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:24.928268   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:25.428014   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:25.928373   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:26.428554   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:26.928606   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:27.428307   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:27.928291   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:28.428266   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:28.928746   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:26.516485   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:29.015728   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:28.062001   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:30.561907   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:29.428627   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:29.927916   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:30.428081   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:30.928074   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:31.428213   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:31.928929   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:32.428716   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:32.928451   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:32.928538   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:32.975187   74868 cri.go:89] found id: ""
	I0729 02:15:32.975214   74868 logs.go:276] 0 containers: []
	W0729 02:15:32.975221   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:32.975227   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:32.975280   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:33.011191   74868 cri.go:89] found id: ""
	I0729 02:15:33.011221   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.011232   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:33.011239   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:33.011297   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:33.048646   74868 cri.go:89] found id: ""
	I0729 02:15:33.048675   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.048685   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:33.048692   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:33.048751   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:33.092620   74868 cri.go:89] found id: ""
	I0729 02:15:33.092647   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.092656   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:33.092661   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:33.092708   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:33.130180   74868 cri.go:89] found id: ""
	I0729 02:15:33.130206   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.130216   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:33.130223   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:33.130284   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:33.168306   74868 cri.go:89] found id: ""
	I0729 02:15:33.168334   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.168345   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:33.168352   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:33.168412   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:33.210680   74868 cri.go:89] found id: ""
	I0729 02:15:33.210712   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.210722   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:33.210730   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:33.210785   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:33.249497   74868 cri.go:89] found id: ""
	I0729 02:15:33.249524   74868 logs.go:276] 0 containers: []
	W0729 02:15:33.249533   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:33.249543   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:33.249557   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:33.299545   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:33.299580   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:33.313844   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:33.313878   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:33.458851   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:33.458876   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:33.458891   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:33.539011   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:33.539052   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:31.016521   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:33.017158   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:35.019540   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:33.062652   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:35.560644   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:36.082913   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:36.096779   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:36.096850   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:36.131542   74868 cri.go:89] found id: ""
	I0729 02:15:36.131566   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.131574   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:36.131579   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:36.131629   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:36.173626   74868 cri.go:89] found id: ""
	I0729 02:15:36.173655   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.173663   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:36.173668   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:36.173727   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:36.212380   74868 cri.go:89] found id: ""
	I0729 02:15:36.212408   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.212415   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:36.212421   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:36.212482   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:36.252292   74868 cri.go:89] found id: ""
	I0729 02:15:36.252317   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.252325   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:36.252330   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:36.252376   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:36.288216   74868 cri.go:89] found id: ""
	I0729 02:15:36.288246   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.288254   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:36.288259   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:36.288307   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:36.322850   74868 cri.go:89] found id: ""
	I0729 02:15:36.322883   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.322894   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:36.322901   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:36.322961   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:36.360405   74868 cri.go:89] found id: ""
	I0729 02:15:36.360446   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.360456   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:36.360463   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:36.360531   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:36.399638   74868 cri.go:89] found id: ""
	I0729 02:15:36.399662   74868 logs.go:276] 0 containers: []
	W0729 02:15:36.399670   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:36.399678   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:36.399689   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:36.452975   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:36.453007   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:36.466413   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:36.466438   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:36.561084   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:36.561111   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:36.561128   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:36.642426   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:36.642461   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:37.516095   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:39.516605   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:37.561266   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:39.563536   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:39.182252   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:39.195983   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:39.196091   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:39.232444   74868 cri.go:89] found id: ""
	I0729 02:15:39.232473   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.232482   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:39.232487   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:39.232538   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:39.267785   74868 cri.go:89] found id: ""
	I0729 02:15:39.267814   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.267824   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:39.267832   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:39.267909   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:39.311794   74868 cri.go:89] found id: ""
	I0729 02:15:39.311816   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.311823   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:39.311828   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:39.311886   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:39.348230   74868 cri.go:89] found id: ""
	I0729 02:15:39.348259   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.348270   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:39.348277   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:39.348325   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:39.387855   74868 cri.go:89] found id: ""
	I0729 02:15:39.387880   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.387887   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:39.387893   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:39.387951   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:39.425574   74868 cri.go:89] found id: ""
	I0729 02:15:39.425603   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.425611   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:39.425619   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:39.425666   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:39.458522   74868 cri.go:89] found id: ""
	I0729 02:15:39.458548   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.458556   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:39.458565   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:39.458614   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:39.493442   74868 cri.go:89] found id: ""
	I0729 02:15:39.493463   74868 logs.go:276] 0 containers: []
	W0729 02:15:39.493470   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:39.493478   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:39.493489   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:39.547906   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:39.547938   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:39.564290   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:39.564327   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:39.639880   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:39.639901   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:39.639916   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:39.722123   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:39.722158   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:42.265729   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:42.285315   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:42.285369   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:42.345627   74868 cri.go:89] found id: ""
	I0729 02:15:42.345653   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.345664   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:42.345673   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:42.345733   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:42.407210   74868 cri.go:89] found id: ""
	I0729 02:15:42.407237   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.407247   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:42.407254   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:42.407314   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:42.452856   74868 cri.go:89] found id: ""
	I0729 02:15:42.452879   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.452886   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:42.452891   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:42.452954   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:42.498739   74868 cri.go:89] found id: ""
	I0729 02:15:42.498766   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.498776   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:42.498788   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:42.498848   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:42.539397   74868 cri.go:89] found id: ""
	I0729 02:15:42.539424   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.539432   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:42.539437   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:42.539481   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:42.575780   74868 cri.go:89] found id: ""
	I0729 02:15:42.575809   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.575819   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:42.575826   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:42.575898   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:42.611564   74868 cri.go:89] found id: ""
	I0729 02:15:42.611593   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.611605   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:42.611609   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:42.611656   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:42.650251   74868 cri.go:89] found id: ""
	I0729 02:15:42.650274   74868 logs.go:276] 0 containers: []
	W0729 02:15:42.650281   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:42.650289   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:42.650301   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:42.722735   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:42.722759   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:42.722776   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:42.801458   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:42.801493   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:42.838892   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:42.838931   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:42.894100   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:42.894141   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:42.016851   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:44.517124   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:42.061858   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:44.560608   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:45.408820   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:45.422889   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:45.422963   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:45.457649   74868 cri.go:89] found id: ""
	I0729 02:15:45.457678   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.457688   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:45.457696   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:45.457759   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:45.496733   74868 cri.go:89] found id: ""
	I0729 02:15:45.496766   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.496777   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:45.496785   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:45.496853   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:45.542241   74868 cri.go:89] found id: ""
	I0729 02:15:45.542275   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.542285   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:45.542292   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:45.542352   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:45.580839   74868 cri.go:89] found id: ""
	I0729 02:15:45.580868   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.580877   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:45.580882   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:45.580944   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:45.617640   74868 cri.go:89] found id: ""
	I0729 02:15:45.617668   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.617679   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:45.617686   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:45.617747   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:45.653157   74868 cri.go:89] found id: ""
	I0729 02:15:45.653185   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.653192   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:45.653198   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:45.653257   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:45.691184   74868 cri.go:89] found id: ""
	I0729 02:15:45.691219   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.691231   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:45.691238   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:45.691297   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:45.725238   74868 cri.go:89] found id: ""
	I0729 02:15:45.725260   74868 logs.go:276] 0 containers: []
	W0729 02:15:45.725268   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:45.725275   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:45.725292   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:45.739575   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:45.739607   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:45.813477   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:45.813498   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:45.813509   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:45.894429   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:45.894463   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:45.935918   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:45.935945   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:48.487026   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:48.501450   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:48.501513   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:48.539396   74868 cri.go:89] found id: ""
	I0729 02:15:48.539423   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.539443   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:48.539451   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:48.539523   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:48.576716   74868 cri.go:89] found id: ""
	I0729 02:15:48.576747   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.576757   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:48.576765   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:48.576821   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:48.613248   74868 cri.go:89] found id: ""
	I0729 02:15:48.613277   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.613287   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:48.613294   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:48.613355   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:48.649302   74868 cri.go:89] found id: ""
	I0729 02:15:48.649324   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.649331   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:48.649337   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:48.649382   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:48.684885   74868 cri.go:89] found id: ""
	I0729 02:15:48.684928   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.684943   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:48.684952   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:48.685015   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:48.719581   74868 cri.go:89] found id: ""
	I0729 02:15:48.719611   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.719622   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:48.719629   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:48.719692   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:48.759546   74868 cri.go:89] found id: ""
	I0729 02:15:48.759572   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.759580   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:48.759586   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:48.759636   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:48.794004   74868 cri.go:89] found id: ""
	I0729 02:15:48.794032   74868 logs.go:276] 0 containers: []
	W0729 02:15:48.794045   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:48.794054   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:48.794076   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:48.847367   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:48.847403   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:48.862339   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:48.862364   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:48.931606   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:48.931631   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:48.931644   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:49.016884   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:49.016917   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:47.015843   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:49.016483   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:46.560664   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:48.561882   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:51.558317   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:51.572555   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:51.572621   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:51.609266   74868 cri.go:89] found id: ""
	I0729 02:15:51.609289   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.609296   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:51.609301   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:51.609348   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:51.643353   74868 cri.go:89] found id: ""
	I0729 02:15:51.643379   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.643386   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:51.643391   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:51.643438   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:51.680374   74868 cri.go:89] found id: ""
	I0729 02:15:51.680403   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.680429   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:51.680436   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:51.680497   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:51.714649   74868 cri.go:89] found id: ""
	I0729 02:15:51.714681   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.714691   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:51.714698   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:51.714760   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:51.753996   74868 cri.go:89] found id: ""
	I0729 02:15:51.754029   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.754038   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:51.754044   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:51.754091   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:51.787810   74868 cri.go:89] found id: ""
	I0729 02:15:51.787843   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.787852   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:51.787860   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:51.787923   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:51.825217   74868 cri.go:89] found id: ""
	I0729 02:15:51.825239   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.825247   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:51.825254   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:51.825302   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:51.859422   74868 cri.go:89] found id: ""
	I0729 02:15:51.859452   74868 logs.go:276] 0 containers: []
	W0729 02:15:51.859462   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:51.859472   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:51.859488   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:51.940013   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:51.940051   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:51.980549   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:51.980579   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:52.033207   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:52.033238   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:52.048051   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:52.048076   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:52.129341   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:51.516413   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:54.016806   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:51.061227   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:53.560519   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:55.561038   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:54.629841   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:54.643012   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:54.643110   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:54.683971   74868 cri.go:89] found id: ""
	I0729 02:15:54.683998   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.684008   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:54.684015   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:54.684075   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:54.719021   74868 cri.go:89] found id: ""
	I0729 02:15:54.719052   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.719074   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:54.719081   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:54.719139   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:54.754060   74868 cri.go:89] found id: ""
	I0729 02:15:54.754093   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.754105   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:54.754112   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:54.754178   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:54.787986   74868 cri.go:89] found id: ""
	I0729 02:15:54.788019   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.788029   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:54.788036   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:54.788094   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:54.824328   74868 cri.go:89] found id: ""
	I0729 02:15:54.824360   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.824370   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:54.824377   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:54.824440   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:54.865395   74868 cri.go:89] found id: ""
	I0729 02:15:54.865418   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.865425   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:54.865431   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:54.865486   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:54.903020   74868 cri.go:89] found id: ""
	I0729 02:15:54.903045   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.903053   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:54.903079   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:54.903139   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:54.940787   74868 cri.go:89] found id: ""
	I0729 02:15:54.940808   74868 logs.go:276] 0 containers: []
	W0729 02:15:54.940816   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:54.940824   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:54.940840   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:54.995625   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:54.995659   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:55.010380   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:55.010417   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:55.082793   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:55.082818   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:55.082833   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:55.160177   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:55.160218   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:57.702817   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:15:57.716532   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:15:57.716619   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:15:57.750525   74868 cri.go:89] found id: ""
	I0729 02:15:57.750555   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.750567   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:15:57.750575   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:15:57.750636   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:15:57.785548   74868 cri.go:89] found id: ""
	I0729 02:15:57.785575   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.785583   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:15:57.785588   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:15:57.785633   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:15:57.826203   74868 cri.go:89] found id: ""
	I0729 02:15:57.826233   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.826244   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:15:57.826251   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:15:57.826309   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:15:57.860249   74868 cri.go:89] found id: ""
	I0729 02:15:57.860276   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.860286   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:15:57.860294   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:15:57.860354   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:15:57.897116   74868 cri.go:89] found id: ""
	I0729 02:15:57.897145   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.897166   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:15:57.897174   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:15:57.897244   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:15:57.930531   74868 cri.go:89] found id: ""
	I0729 02:15:57.930554   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.930561   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:15:57.930567   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:15:57.930612   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:15:57.968217   74868 cri.go:89] found id: ""
	I0729 02:15:57.968241   74868 logs.go:276] 0 containers: []
	W0729 02:15:57.968249   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:15:57.968255   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:15:57.968303   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:15:58.008331   74868 cri.go:89] found id: ""
	I0729 02:15:58.008355   74868 logs.go:276] 0 containers: []
	W0729 02:15:58.008363   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:15:58.008371   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:15:58.008382   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:15:58.062922   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:15:58.062963   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:15:58.077438   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:15:58.077463   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:15:58.150390   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:15:58.150411   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:15:58.150424   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:15:58.226175   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:15:58.226215   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:15:56.017300   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:58.516130   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:15:57.562483   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:00.063119   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:00.766435   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:00.779464   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:00.779537   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:00.814111   74868 cri.go:89] found id: ""
	I0729 02:16:00.814138   74868 logs.go:276] 0 containers: []
	W0729 02:16:00.814149   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:00.814155   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:00.814214   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:00.848414   74868 cri.go:89] found id: ""
	I0729 02:16:00.848445   74868 logs.go:276] 0 containers: []
	W0729 02:16:00.848455   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:00.848462   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:00.848525   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:00.883450   74868 cri.go:89] found id: ""
	I0729 02:16:00.883475   74868 logs.go:276] 0 containers: []
	W0729 02:16:00.883483   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:00.883488   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:00.883536   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:00.918344   74868 cri.go:89] found id: ""
	I0729 02:16:00.918373   74868 logs.go:276] 0 containers: []
	W0729 02:16:00.918381   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:00.918386   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:00.918432   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:00.953598   74868 cri.go:89] found id: ""
	I0729 02:16:00.953626   74868 logs.go:276] 0 containers: []
	W0729 02:16:00.953634   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:00.953640   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:00.953709   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:00.989082   74868 cri.go:89] found id: ""
	I0729 02:16:00.989113   74868 logs.go:276] 0 containers: []
	W0729 02:16:00.989124   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:00.989130   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:00.989189   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:01.029034   74868 cri.go:89] found id: ""
	I0729 02:16:01.029058   74868 logs.go:276] 0 containers: []
	W0729 02:16:01.029066   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:01.029071   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:01.029130   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:01.064876   74868 cri.go:89] found id: ""
	I0729 02:16:01.064906   74868 logs.go:276] 0 containers: []
	W0729 02:16:01.064916   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:01.064927   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:01.064943   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:01.115757   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:01.115787   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:01.129962   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:01.129993   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:01.204959   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:01.204979   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:01.204991   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:01.282769   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:01.282799   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:03.828027   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:03.843112   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:03.843201   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:03.884113   74868 cri.go:89] found id: ""
	I0729 02:16:03.884166   74868 logs.go:276] 0 containers: []
	W0729 02:16:03.884176   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:03.884184   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:03.884243   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:03.918747   74868 cri.go:89] found id: ""
	I0729 02:16:03.918777   74868 logs.go:276] 0 containers: []
	W0729 02:16:03.918787   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:03.918794   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:03.918856   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:03.953656   74868 cri.go:89] found id: ""
	I0729 02:16:03.953685   74868 logs.go:276] 0 containers: []
	W0729 02:16:03.953693   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:03.953698   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:03.953746   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:03.992814   74868 cri.go:89] found id: ""
	I0729 02:16:03.992841   74868 logs.go:276] 0 containers: []
	W0729 02:16:03.992850   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:03.992855   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:03.992907   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:04.029000   74868 cri.go:89] found id: ""
	I0729 02:16:04.029030   74868 logs.go:276] 0 containers: []
	W0729 02:16:04.029040   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:04.029045   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:04.029096   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:04.064851   74868 cri.go:89] found id: ""
	I0729 02:16:04.064878   74868 logs.go:276] 0 containers: []
	W0729 02:16:04.064888   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:04.064893   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:04.064940   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:04.103594   74868 cri.go:89] found id: ""
	I0729 02:16:04.103621   74868 logs.go:276] 0 containers: []
	W0729 02:16:04.103632   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:04.103641   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:04.103700   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:01.017059   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:03.017852   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:05.516643   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:02.562055   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:05.061570   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:04.138581   74868 cri.go:89] found id: ""
	I0729 02:16:04.138612   74868 logs.go:276] 0 containers: []
	W0729 02:16:04.138624   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:04.138636   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:04.138651   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:04.190722   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:04.190758   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:04.206639   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:04.206668   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:04.279264   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:04.279284   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:04.279298   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:04.358866   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:04.358913   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:06.901754   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:06.914822   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:06.914910   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:06.948190   74868 cri.go:89] found id: ""
	I0729 02:16:06.948215   74868 logs.go:276] 0 containers: []
	W0729 02:16:06.948229   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:06.948235   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:06.948280   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:06.981346   74868 cri.go:89] found id: ""
	I0729 02:16:06.981371   74868 logs.go:276] 0 containers: []
	W0729 02:16:06.981380   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:06.981385   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:06.981444   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:07.015716   74868 cri.go:89] found id: ""
	I0729 02:16:07.015742   74868 logs.go:276] 0 containers: []
	W0729 02:16:07.015753   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:07.015759   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:07.015817   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:07.053261   74868 cri.go:89] found id: ""
	I0729 02:16:07.053283   74868 logs.go:276] 0 containers: []
	W0729 02:16:07.053290   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:07.053296   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:07.053340   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:07.093306   74868 cri.go:89] found id: ""
	I0729 02:16:07.093330   74868 logs.go:276] 0 containers: []
	W0729 02:16:07.093340   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:07.093346   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:07.093401   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:07.128153   74868 cri.go:89] found id: ""
	I0729 02:16:07.128177   74868 logs.go:276] 0 containers: []
	W0729 02:16:07.128184   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:07.128189   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:07.128235   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:07.166724   74868 cri.go:89] found id: ""
	I0729 02:16:07.166749   74868 logs.go:276] 0 containers: []
	W0729 02:16:07.166757   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:07.166763   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:07.166807   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:07.201583   74868 cri.go:89] found id: ""
	I0729 02:16:07.201609   74868 logs.go:276] 0 containers: []
	W0729 02:16:07.201619   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:07.201629   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:07.201642   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:07.217455   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:07.217492   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:07.292273   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:07.292292   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:07.292305   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:07.369918   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:07.369953   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:07.410752   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:07.410789   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:08.015793   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:10.017696   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:07.062349   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:09.562274   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:09.965515   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:09.980100   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:09.980171   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:10.022720   74868 cri.go:89] found id: ""
	I0729 02:16:10.022788   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.022803   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:10.022811   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:10.022867   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:10.058732   74868 cri.go:89] found id: ""
	I0729 02:16:10.058774   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.058785   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:10.058792   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:10.058858   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:10.098811   74868 cri.go:89] found id: ""
	I0729 02:16:10.098839   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.098850   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:10.098857   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:10.098920   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:10.138623   74868 cri.go:89] found id: ""
	I0729 02:16:10.138664   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.138676   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:10.138684   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:10.138750   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:10.177424   74868 cri.go:89] found id: ""
	I0729 02:16:10.177450   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.177457   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:10.177463   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:10.177526   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:10.213310   74868 cri.go:89] found id: ""
	I0729 02:16:10.213337   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.213347   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:10.213358   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:10.213408   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:10.249595   74868 cri.go:89] found id: ""
	I0729 02:16:10.249623   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.249635   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:10.249642   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:10.249689   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:10.283072   74868 cri.go:89] found id: ""
	I0729 02:16:10.283097   74868 logs.go:276] 0 containers: []
	W0729 02:16:10.283105   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:10.283113   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:10.283125   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:10.332663   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:10.332695   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:10.349112   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:10.349139   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:10.422690   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:10.422713   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:10.422724   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:10.509958   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:10.509993   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:13.055483   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:13.070217   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:13.070280   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:13.109942   74868 cri.go:89] found id: ""
	I0729 02:16:13.109967   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.109976   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:13.109981   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:13.110035   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:13.146924   74868 cri.go:89] found id: ""
	I0729 02:16:13.146957   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.146966   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:13.146974   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:13.147030   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:13.185377   74868 cri.go:89] found id: ""
	I0729 02:16:13.185405   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.185416   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:13.185423   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:13.185476   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:13.225126   74868 cri.go:89] found id: ""
	I0729 02:16:13.225171   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.225183   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:13.225190   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:13.225261   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:13.259134   74868 cri.go:89] found id: ""
	I0729 02:16:13.259165   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.259175   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:13.259181   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:13.259245   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:13.294148   74868 cri.go:89] found id: ""
	I0729 02:16:13.294186   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.294196   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:13.294207   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:13.294266   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:13.341548   74868 cri.go:89] found id: ""
	I0729 02:16:13.341569   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.341576   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:13.341582   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:13.341626   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:13.382100   74868 cri.go:89] found id: ""
	I0729 02:16:13.382129   74868 logs.go:276] 0 containers: []
	W0729 02:16:13.382137   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:13.382145   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:13.382158   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:13.434996   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:13.435025   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:13.449451   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:13.449483   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:13.525964   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:13.525983   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:13.526000   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:13.605781   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:13.605813   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:12.515871   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:14.516353   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:12.061203   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:14.062442   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:16.146215   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:16.158916   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:16.158984   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:16.199765   74868 cri.go:89] found id: ""
	I0729 02:16:16.199795   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.199806   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:16.199817   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:16.199878   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:16.235868   74868 cri.go:89] found id: ""
	I0729 02:16:16.235902   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.235910   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:16.235915   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:16.235973   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:16.272212   74868 cri.go:89] found id: ""
	I0729 02:16:16.272235   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.272246   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:16.272254   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:16.272306   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:16.309493   74868 cri.go:89] found id: ""
	I0729 02:16:16.309516   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.309526   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:16.309533   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:16.309601   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:16.352974   74868 cri.go:89] found id: ""
	I0729 02:16:16.353003   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.353015   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:16.353022   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:16.353088   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:16.393091   74868 cri.go:89] found id: ""
	I0729 02:16:16.393118   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.393134   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:16.393144   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:16.393208   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:16.429169   74868 cri.go:89] found id: ""
	I0729 02:16:16.429194   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.429205   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:16.429211   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:16.429271   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:16.470651   74868 cri.go:89] found id: ""
	I0729 02:16:16.470675   74868 logs.go:276] 0 containers: []
	W0729 02:16:16.470683   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:16.470690   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:16.470702   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:16.484140   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:16.484172   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:16.556907   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:16.556932   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:16.556944   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:16.634992   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:16.635029   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:16.672642   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:16.672675   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:17.016999   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:19.518154   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:16.562539   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:19.061307   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:19.223951   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:19.238119   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:19.238175   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:19.278106   74868 cri.go:89] found id: ""
	I0729 02:16:19.278129   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.278137   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:19.278142   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:19.278189   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:19.312087   74868 cri.go:89] found id: ""
	I0729 02:16:19.312121   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.312131   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:19.312140   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:19.312200   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:19.347262   74868 cri.go:89] found id: ""
	I0729 02:16:19.347294   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.347301   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:19.347307   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:19.347354   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:19.383552   74868 cri.go:89] found id: ""
	I0729 02:16:19.383575   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.383583   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:19.383588   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:19.383633   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:19.419391   74868 cri.go:89] found id: ""
	I0729 02:16:19.419412   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.419419   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:19.419424   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:19.419472   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:19.458694   74868 cri.go:89] found id: ""
	I0729 02:16:19.458716   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.458724   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:19.458730   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:19.458778   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:19.493873   74868 cri.go:89] found id: ""
	I0729 02:16:19.493902   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.493913   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:19.493921   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:19.493978   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:19.536163   74868 cri.go:89] found id: ""
	I0729 02:16:19.536189   74868 logs.go:276] 0 containers: []
	W0729 02:16:19.536199   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:19.536210   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:19.536227   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:19.576630   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:19.576655   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:19.630069   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:19.630103   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:19.643713   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:19.643740   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:19.720899   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:19.720922   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:19.720936   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:22.311743   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:22.326388   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:22.326455   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:22.364954   74868 cri.go:89] found id: ""
	I0729 02:16:22.364979   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.364987   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:22.364993   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:22.365058   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:22.406248   74868 cri.go:89] found id: ""
	I0729 02:16:22.406274   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.406281   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:22.406286   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:22.406343   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:22.442357   74868 cri.go:89] found id: ""
	I0729 02:16:22.442385   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.442396   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:22.442403   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:22.442466   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:22.478360   74868 cri.go:89] found id: ""
	I0729 02:16:22.478391   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.478402   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:22.478409   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:22.478521   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:22.515289   74868 cri.go:89] found id: ""
	I0729 02:16:22.515315   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.515325   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:22.515332   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:22.515389   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:22.550112   74868 cri.go:89] found id: ""
	I0729 02:16:22.550134   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.550142   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:22.550147   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:22.550197   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:22.591369   74868 cri.go:89] found id: ""
	I0729 02:16:22.591399   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.591410   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:22.591417   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:22.591481   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:22.634713   74868 cri.go:89] found id: ""
	I0729 02:16:22.634738   74868 logs.go:276] 0 containers: []
	W0729 02:16:22.634745   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:22.634759   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:22.634780   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:22.712599   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:22.712624   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:22.712636   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:22.798030   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:22.798069   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:22.854780   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:22.854805   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:22.907916   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:22.907948   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:22.016192   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:24.515460   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:21.065623   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:23.560623   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:25.562462   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:25.422864   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:25.436217   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:25.436282   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:25.469265   74868 cri.go:89] found id: ""
	I0729 02:16:25.469288   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.469295   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:25.469301   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:25.469353   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:25.506724   74868 cri.go:89] found id: ""
	I0729 02:16:25.506753   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.506761   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:25.506767   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:25.506816   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:25.553412   74868 cri.go:89] found id: ""
	I0729 02:16:25.553464   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.553476   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:25.553484   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:25.553599   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:25.594547   74868 cri.go:89] found id: ""
	I0729 02:16:25.594570   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.594578   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:25.594583   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:25.594629   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:25.630768   74868 cri.go:89] found id: ""
	I0729 02:16:25.630792   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.630801   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:25.630806   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:25.630855   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:25.666100   74868 cri.go:89] found id: ""
	I0729 02:16:25.666132   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.666143   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:25.666150   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:25.666213   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:25.701625   74868 cri.go:89] found id: ""
	I0729 02:16:25.701655   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.701664   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:25.701669   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:25.701715   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:25.735097   74868 cri.go:89] found id: ""
	I0729 02:16:25.735125   74868 logs.go:276] 0 containers: []
	W0729 02:16:25.735132   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:25.735141   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:25.735153   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:25.809945   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:25.809967   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:25.809978   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:25.889432   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:25.889464   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:25.928332   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:25.928373   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:25.983165   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:25.983193   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:28.497904   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:28.513255   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:28.513331   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:28.563714   74868 cri.go:89] found id: ""
	I0729 02:16:28.563739   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.563749   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:28.563756   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:28.563830   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:28.614206   74868 cri.go:89] found id: ""
	I0729 02:16:28.614235   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.614244   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:28.614251   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:28.614308   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:28.654955   74868 cri.go:89] found id: ""
	I0729 02:16:28.654981   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.654989   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:28.654994   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:28.655041   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:28.694917   74868 cri.go:89] found id: ""
	I0729 02:16:28.694946   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.694956   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:28.694963   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:28.695020   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:28.737950   74868 cri.go:89] found id: ""
	I0729 02:16:28.737974   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.737983   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:28.737988   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:28.738037   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:28.773110   74868 cri.go:89] found id: ""
	I0729 02:16:28.773136   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.773146   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:28.773163   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:28.773224   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:28.807558   74868 cri.go:89] found id: ""
	I0729 02:16:28.807583   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.807594   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:28.807600   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:28.807657   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:28.841645   74868 cri.go:89] found id: ""
	I0729 02:16:28.841667   74868 logs.go:276] 0 containers: []
	W0729 02:16:28.841682   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:28.841693   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:28.841709   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:28.891358   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:28.891388   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:28.905519   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:28.905547   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:28.982327   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:28.982350   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:28.982366   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:29.059496   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:29.059527   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:26.515870   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:28.516582   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:28.062287   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:30.561355   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:31.598042   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:31.611795   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:31.611859   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:31.645871   74868 cri.go:89] found id: ""
	I0729 02:16:31.645906   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.645914   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:31.645920   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:31.645973   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:31.686904   74868 cri.go:89] found id: ""
	I0729 02:16:31.686936   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.686947   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:31.686955   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:31.687028   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:31.721072   74868 cri.go:89] found id: ""
	I0729 02:16:31.721102   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.721113   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:31.721120   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:31.721179   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:31.755141   74868 cri.go:89] found id: ""
	I0729 02:16:31.755166   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.755174   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:31.755180   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:31.755226   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:31.788095   74868 cri.go:89] found id: ""
	I0729 02:16:31.788122   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.788131   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:31.788137   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:31.788190   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:31.820250   74868 cri.go:89] found id: ""
	I0729 02:16:31.820273   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.820281   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:31.820286   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:31.820331   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:31.854132   74868 cri.go:89] found id: ""
	I0729 02:16:31.854162   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.854171   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:31.854179   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:31.854235   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:31.891447   74868 cri.go:89] found id: ""
	I0729 02:16:31.891473   74868 logs.go:276] 0 containers: []
	W0729 02:16:31.891481   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:31.891490   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:31.891502   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:31.943728   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:31.943765   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:31.957216   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:31.957250   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:32.029830   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:32.029858   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:32.029872   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:32.106332   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:32.106366   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:31.016349   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:33.515520   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:32.562169   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:34.571440   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:34.643878   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:34.657283   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:34.657350   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:34.693244   74868 cri.go:89] found id: ""
	I0729 02:16:34.693268   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.693276   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:34.693281   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:34.693329   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:34.731405   74868 cri.go:89] found id: ""
	I0729 02:16:34.731433   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.731444   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:34.731451   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:34.731514   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:34.769104   74868 cri.go:89] found id: ""
	I0729 02:16:34.769131   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.769141   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:34.769148   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:34.769208   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:34.801788   74868 cri.go:89] found id: ""
	I0729 02:16:34.801821   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.801830   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:34.801838   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:34.801898   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:34.836599   74868 cri.go:89] found id: ""
	I0729 02:16:34.836626   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.836634   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:34.836639   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:34.836688   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:34.870922   74868 cri.go:89] found id: ""
	I0729 02:16:34.870944   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.870952   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:34.870957   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:34.871017   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:34.905978   74868 cri.go:89] found id: ""
	I0729 02:16:34.906004   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.906015   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:34.906023   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:34.906087   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:34.940145   74868 cri.go:89] found id: ""
	I0729 02:16:34.940185   74868 logs.go:276] 0 containers: []
	W0729 02:16:34.940195   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:34.940208   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:34.940221   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:35.017492   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:35.017524   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:35.057590   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:35.057619   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:35.107917   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:35.107951   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:35.122137   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:35.122165   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:35.190169   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:37.690856   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:37.704050   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:37.704116   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:37.738084   74868 cri.go:89] found id: ""
	I0729 02:16:37.738114   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.738124   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:37.738157   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:37.738225   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:37.772170   74868 cri.go:89] found id: ""
	I0729 02:16:37.772194   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.772202   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:37.772208   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:37.772258   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:37.806125   74868 cri.go:89] found id: ""
	I0729 02:16:37.806159   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.806170   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:37.806178   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:37.806238   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:37.841202   74868 cri.go:89] found id: ""
	I0729 02:16:37.841230   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.841241   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:37.841249   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:37.841311   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:37.878876   74868 cri.go:89] found id: ""
	I0729 02:16:37.878899   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.878906   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:37.878912   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:37.878975   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:37.912232   74868 cri.go:89] found id: ""
	I0729 02:16:37.912261   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.912270   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:37.912277   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:37.912335   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:37.949102   74868 cri.go:89] found id: ""
	I0729 02:16:37.949128   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.949135   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:37.949141   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:37.949187   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:37.984112   74868 cri.go:89] found id: ""
	I0729 02:16:37.984139   74868 logs.go:276] 0 containers: []
	W0729 02:16:37.984149   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:37.984158   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:37.984168   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:38.036575   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:38.036607   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:38.050689   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:38.050715   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:38.119368   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:38.119387   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:38.119399   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:38.197449   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:38.197489   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:36.016614   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:38.516017   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:37.061018   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:39.061321   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:40.741365   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:40.754691   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:40.754768   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:40.799516   74868 cri.go:89] found id: ""
	I0729 02:16:40.799546   74868 logs.go:276] 0 containers: []
	W0729 02:16:40.799554   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:40.799560   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:40.799614   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:40.833815   74868 cri.go:89] found id: ""
	I0729 02:16:40.833839   74868 logs.go:276] 0 containers: []
	W0729 02:16:40.833847   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:40.833853   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:40.833907   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:40.868456   74868 cri.go:89] found id: ""
	I0729 02:16:40.868484   74868 logs.go:276] 0 containers: []
	W0729 02:16:40.868494   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:40.868499   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:40.868563   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:40.906548   74868 cri.go:89] found id: ""
	I0729 02:16:40.906577   74868 logs.go:276] 0 containers: []
	W0729 02:16:40.906586   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:40.906593   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:40.906652   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:40.941184   74868 cri.go:89] found id: ""
	I0729 02:16:40.941215   74868 logs.go:276] 0 containers: []
	W0729 02:16:40.941225   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:40.941231   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:40.941294   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:40.976631   74868 cri.go:89] found id: ""
	I0729 02:16:40.976657   74868 logs.go:276] 0 containers: []
	W0729 02:16:40.976666   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:40.976673   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:40.976736   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:41.008616   74868 cri.go:89] found id: ""
	I0729 02:16:41.008645   74868 logs.go:276] 0 containers: []
	W0729 02:16:41.008654   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:41.008662   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:41.008726   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:41.048330   74868 cri.go:89] found id: ""
	I0729 02:16:41.048353   74868 logs.go:276] 0 containers: []
	W0729 02:16:41.048362   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:41.048370   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:41.048382   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:41.089948   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:41.089981   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:41.140767   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:41.140796   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:41.154820   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:41.154845   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:41.227318   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:41.227344   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:41.227362   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:43.810887   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:43.823879   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:43.823965   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:43.859773   74868 cri.go:89] found id: ""
	I0729 02:16:43.859801   74868 logs.go:276] 0 containers: []
	W0729 02:16:43.859811   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:43.859819   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:43.859879   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:43.895139   74868 cri.go:89] found id: ""
	I0729 02:16:43.895167   74868 logs.go:276] 0 containers: []
	W0729 02:16:43.895177   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:43.895188   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:43.895242   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:43.930997   74868 cri.go:89] found id: ""
	I0729 02:16:43.931031   74868 logs.go:276] 0 containers: []
	W0729 02:16:43.931042   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:43.931049   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:43.931122   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:43.964423   74868 cri.go:89] found id: ""
	I0729 02:16:43.964448   74868 logs.go:276] 0 containers: []
	W0729 02:16:43.964456   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:43.964461   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:43.964507   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:43.997273   74868 cri.go:89] found id: ""
	I0729 02:16:43.997294   74868 logs.go:276] 0 containers: []
	W0729 02:16:43.997302   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:43.997307   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:43.997355   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:44.032749   74868 cri.go:89] found id: ""
	I0729 02:16:44.032778   74868 logs.go:276] 0 containers: []
	W0729 02:16:44.032788   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:44.032795   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:44.032856   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:44.073917   74868 cri.go:89] found id: ""
	I0729 02:16:44.073950   74868 logs.go:276] 0 containers: []
	W0729 02:16:44.073961   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:44.073971   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:44.074031   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:44.109288   74868 cri.go:89] found id: ""
	I0729 02:16:44.109317   74868 logs.go:276] 0 containers: []
	W0729 02:16:44.109328   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:44.109339   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:44.109356   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 02:16:41.016815   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:43.515847   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:45.516257   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:41.062423   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:43.560444   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:45.561461   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	W0729 02:16:44.182189   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:44.182214   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:44.182230   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:44.256230   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:44.256263   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:44.298091   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:44.298131   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:44.350755   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:44.350788   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:46.866034   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:46.879964   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:46.880027   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:46.919618   74868 cri.go:89] found id: ""
	I0729 02:16:46.919639   74868 logs.go:276] 0 containers: []
	W0729 02:16:46.919647   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:46.919652   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:46.919706   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:46.956657   74868 cri.go:89] found id: ""
	I0729 02:16:46.956681   74868 logs.go:276] 0 containers: []
	W0729 02:16:46.956689   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:46.956695   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:46.956749   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:46.992347   74868 cri.go:89] found id: ""
	I0729 02:16:46.992371   74868 logs.go:276] 0 containers: []
	W0729 02:16:46.992379   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:46.992384   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:46.992436   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:47.029804   74868 cri.go:89] found id: ""
	I0729 02:16:47.029831   74868 logs.go:276] 0 containers: []
	W0729 02:16:47.029841   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:47.029847   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:47.029893   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:47.069427   74868 cri.go:89] found id: ""
	I0729 02:16:47.069459   74868 logs.go:276] 0 containers: []
	W0729 02:16:47.069469   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:47.069476   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:47.069530   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:47.104887   74868 cri.go:89] found id: ""
	I0729 02:16:47.104917   74868 logs.go:276] 0 containers: []
	W0729 02:16:47.104925   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:47.104930   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:47.104976   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:47.139690   74868 cri.go:89] found id: ""
	I0729 02:16:47.139717   74868 logs.go:276] 0 containers: []
	W0729 02:16:47.139725   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:47.139730   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:47.139776   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:47.175690   74868 cri.go:89] found id: ""
	I0729 02:16:47.175716   74868 logs.go:276] 0 containers: []
	W0729 02:16:47.175724   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:47.175732   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:47.175743   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:47.227852   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:47.227885   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:47.242273   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:47.242307   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:47.311928   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:47.311948   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:47.311961   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:47.389929   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:47.389974   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:48.016774   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:50.017042   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:47.562387   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:50.062596   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:49.936239   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:49.949488   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:49.949542   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:49.984964   74868 cri.go:89] found id: ""
	I0729 02:16:49.984991   74868 logs.go:276] 0 containers: []
	W0729 02:16:49.985001   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:49.985009   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:49.985070   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:50.020679   74868 cri.go:89] found id: ""
	I0729 02:16:50.020704   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.020715   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:50.020723   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:50.020783   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:50.054913   74868 cri.go:89] found id: ""
	I0729 02:16:50.054940   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.054950   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:50.054957   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:50.055020   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:50.092908   74868 cri.go:89] found id: ""
	I0729 02:16:50.092932   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.092944   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:50.092956   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:50.093005   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:50.133821   74868 cri.go:89] found id: ""
	I0729 02:16:50.133862   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.133872   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:50.133879   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:50.133928   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:50.169827   74868 cri.go:89] found id: ""
	I0729 02:16:50.169857   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.169869   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:50.169877   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:50.169977   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:50.211121   74868 cri.go:89] found id: ""
	I0729 02:16:50.211144   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.211151   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:50.211157   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:50.211202   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:50.245374   74868 cri.go:89] found id: ""
	I0729 02:16:50.245404   74868 logs.go:276] 0 containers: []
	W0729 02:16:50.245412   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:50.245431   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:50.245447   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:50.302107   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:50.302138   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:50.317259   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:50.317284   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:50.383929   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:50.383951   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:50.383966   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:50.459178   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:50.459213   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:52.998313   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:53.011525   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:53.011588   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:53.047214   74868 cri.go:89] found id: ""
	I0729 02:16:53.047237   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.047245   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:53.047251   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:53.047295   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:53.082080   74868 cri.go:89] found id: ""
	I0729 02:16:53.082105   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.082112   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:53.082117   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:53.082175   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:53.116799   74868 cri.go:89] found id: ""
	I0729 02:16:53.116828   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.116852   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:53.116857   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:53.116903   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:53.150994   74868 cri.go:89] found id: ""
	I0729 02:16:53.151020   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.151028   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:53.151034   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:53.151101   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:53.186358   74868 cri.go:89] found id: ""
	I0729 02:16:53.186387   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.186398   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:53.186406   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:53.186454   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:53.220378   74868 cri.go:89] found id: ""
	I0729 02:16:53.220400   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.220407   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:53.220413   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:53.220460   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:53.255497   74868 cri.go:89] found id: ""
	I0729 02:16:53.255521   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.255529   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:53.255534   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:53.255578   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:53.289711   74868 cri.go:89] found id: ""
	I0729 02:16:53.289739   74868 logs.go:276] 0 containers: []
	W0729 02:16:53.289749   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:53.289758   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:53.289773   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:53.340884   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:53.340916   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:53.355843   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:53.355870   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:53.429607   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:53.429633   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:53.429649   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:53.511667   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:53.511701   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:52.515826   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:55.016685   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:52.561183   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:54.561274   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:56.056430   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:56.071517   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:56.071584   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:56.104498   74868 cri.go:89] found id: ""
	I0729 02:16:56.104526   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.104536   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:56.104543   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:56.104603   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:56.137657   74868 cri.go:89] found id: ""
	I0729 02:16:56.137685   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.137694   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:56.137701   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:56.137761   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:56.171724   74868 cri.go:89] found id: ""
	I0729 02:16:56.171749   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.171759   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:56.171766   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:56.171826   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:56.204553   74868 cri.go:89] found id: ""
	I0729 02:16:56.204581   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.204591   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:56.204599   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:56.204661   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:56.238278   74868 cri.go:89] found id: ""
	I0729 02:16:56.238307   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.238322   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:56.238329   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:56.238404   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:56.273967   74868 cri.go:89] found id: ""
	I0729 02:16:56.273995   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.274006   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:56.274014   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:56.274065   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:56.311174   74868 cri.go:89] found id: ""
	I0729 02:16:56.311199   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.311207   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:56.311217   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:56.311273   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:56.347966   74868 cri.go:89] found id: ""
	I0729 02:16:56.347992   74868 logs.go:276] 0 containers: []
	W0729 02:16:56.347999   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:56.348007   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:56.348023   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:56.424109   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:56.424129   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:56.424143   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:56.507414   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:56.507450   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:16:56.552315   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:56.552343   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:56.604240   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:56.604272   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:57.515947   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:59.516497   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:56.561715   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:59.060925   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:16:59.118596   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:16:59.131389   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:16:59.131461   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:16:59.169373   74868 cri.go:89] found id: ""
	I0729 02:16:59.169400   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.169410   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:16:59.169416   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:16:59.169473   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:16:59.207929   74868 cri.go:89] found id: ""
	I0729 02:16:59.207958   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.207967   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:16:59.207973   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:16:59.208026   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:16:59.242054   74868 cri.go:89] found id: ""
	I0729 02:16:59.242086   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.242096   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:16:59.242103   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:16:59.242159   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:16:59.276706   74868 cri.go:89] found id: ""
	I0729 02:16:59.276735   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.276743   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:16:59.276749   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:16:59.276798   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:16:59.315203   74868 cri.go:89] found id: ""
	I0729 02:16:59.315228   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.315240   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:16:59.315247   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:16:59.315304   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:16:59.353878   74868 cri.go:89] found id: ""
	I0729 02:16:59.353904   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.353916   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:16:59.353921   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:16:59.353981   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:16:59.397157   74868 cri.go:89] found id: ""
	I0729 02:16:59.397181   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.397188   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:16:59.397193   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:16:59.397240   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:16:59.436389   74868 cri.go:89] found id: ""
	I0729 02:16:59.436416   74868 logs.go:276] 0 containers: []
	W0729 02:16:59.436423   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:16:59.436432   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:16:59.436451   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:16:59.492108   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:16:59.492149   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:16:59.505937   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:16:59.505964   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:16:59.577872   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:16:59.577894   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:16:59.577907   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:16:59.658872   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:16:59.658912   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:17:02.204800   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:17:02.219357   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:17:02.219417   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:17:02.257792   74868 cri.go:89] found id: ""
	I0729 02:17:02.257816   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.257823   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:17:02.257851   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:17:02.257901   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:17:02.303568   74868 cri.go:89] found id: ""
	I0729 02:17:02.303602   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.303609   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:17:02.303614   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:17:02.303682   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:17:02.339777   74868 cri.go:89] found id: ""
	I0729 02:17:02.339809   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.339820   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:17:02.339827   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:17:02.339885   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:17:02.378980   74868 cri.go:89] found id: ""
	I0729 02:17:02.379010   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.379022   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:17:02.379029   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:17:02.379113   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:17:02.416364   74868 cri.go:89] found id: ""
	I0729 02:17:02.416395   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.416404   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:17:02.416410   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:17:02.416455   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:17:02.456069   74868 cri.go:89] found id: ""
	I0729 02:17:02.456097   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.456107   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:17:02.456114   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:17:02.456169   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:17:02.491248   74868 cri.go:89] found id: ""
	I0729 02:17:02.491280   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.491289   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:17:02.491296   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:17:02.491348   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:17:02.528777   74868 cri.go:89] found id: ""
	I0729 02:17:02.528810   74868 logs.go:276] 0 containers: []
	W0729 02:17:02.528822   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:17:02.528834   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:17:02.528851   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:17:02.613603   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:17:02.613631   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:17:02.613647   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:17:02.694168   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:17:02.694202   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:17:02.737756   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:17:02.737786   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:17:02.791470   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:17:02.791510   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:17:02.016660   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:17:04.517008   74243 pod_ready.go:102] pod "metrics-server-569cc877fc-m9nnh" in "kube-system" namespace has status "Ready":"False"
	I0729 02:17:01.061620   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:17:03.561634   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:17:05.562859   74477 pod_ready.go:102] pod "metrics-server-78fcd8795b-4cpr8" in "kube-system" namespace has status "Ready":"False"
	I0729 02:17:05.306692   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:17:05.320762   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:17:05.320862   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:17:05.356756   74868 cri.go:89] found id: ""
	I0729 02:17:05.356787   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.356799   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:17:05.356808   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:17:05.356881   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:17:05.391279   74868 cri.go:89] found id: ""
	I0729 02:17:05.391307   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.391317   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:17:05.391324   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:17:05.391383   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:17:05.426001   74868 cri.go:89] found id: ""
	I0729 02:17:05.426030   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.426041   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:17:05.426049   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:17:05.426105   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:17:05.461827   74868 cri.go:89] found id: ""
	I0729 02:17:05.461858   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.461866   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:17:05.461871   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:17:05.461923   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:17:05.495906   74868 cri.go:89] found id: ""
	I0729 02:17:05.495928   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.495936   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:17:05.495942   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:17:05.495989   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:17:05.535738   74868 cri.go:89] found id: ""
	I0729 02:17:05.535767   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.535777   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:17:05.535783   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:17:05.535853   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:17:05.572020   74868 cri.go:89] found id: ""
	I0729 02:17:05.572047   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.572057   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:17:05.572065   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:17:05.572134   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:17:05.611875   74868 cri.go:89] found id: ""
	I0729 02:17:05.611913   74868 logs.go:276] 0 containers: []
	W0729 02:17:05.611923   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:17:05.611932   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:17:05.611947   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:17:05.648906   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:17:05.648932   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:17:05.700419   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:17:05.700455   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:17:05.715020   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:17:05.715049   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:17:05.786726   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:17:05.786749   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:17:05.786765   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:17:08.363879   74868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 02:17:08.379989   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:17:08.380066   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:17:08.422510   74868 cri.go:89] found id: ""
	I0729 02:17:08.422536   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.422544   74868 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:17:08.422549   74868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:17:08.422613   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:17:08.466166   74868 cri.go:89] found id: ""
	I0729 02:17:08.466194   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.466215   74868 logs.go:278] No container was found matching "etcd"
	I0729 02:17:08.466223   74868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:17:08.466287   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:17:08.505081   74868 cri.go:89] found id: ""
	I0729 02:17:08.505108   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.505116   74868 logs.go:278] No container was found matching "coredns"
	I0729 02:17:08.505121   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:17:08.505170   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:17:08.548417   74868 cri.go:89] found id: ""
	I0729 02:17:08.548441   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.548448   74868 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:17:08.548454   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:17:08.548514   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:17:08.592697   74868 cri.go:89] found id: ""
	I0729 02:17:08.592719   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.592728   74868 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:17:08.592735   74868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:17:08.592793   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:17:08.632924   74868 cri.go:89] found id: ""
	I0729 02:17:08.632947   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.632958   74868 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:17:08.632965   74868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:17:08.633024   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:17:08.669322   74868 cri.go:89] found id: ""
	I0729 02:17:08.669350   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.669359   74868 logs.go:278] No container was found matching "kindnet"
	I0729 02:17:08.669366   74868 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 02:17:08.669421   74868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 02:17:08.707569   74868 cri.go:89] found id: ""
	I0729 02:17:08.707597   74868 logs.go:276] 0 containers: []
	W0729 02:17:08.707607   74868 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 02:17:08.707617   74868 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:17:08.707630   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 02:17:08.800581   74868 logs.go:123] Gathering logs for container status ...
	I0729 02:17:08.800622   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:17:08.841537   74868 logs.go:123] Gathering logs for kubelet ...
	I0729 02:17:08.841564   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:17:08.893800   74868 logs.go:123] Gathering logs for dmesg ...
	I0729 02:17:08.893838   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:17:08.907637   74868 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:17:08.907668   74868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:17:08.986829   74868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:17:08.343330   67284 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000370121s
	I0729 02:17:08.343389   67284 kubeadm.go:310] 
	I0729 02:17:08.343456   67284 kubeadm.go:310] Unfortunately, an error has occurred:
	I0729 02:17:08.343504   67284 kubeadm.go:310] 	context deadline exceeded
	I0729 02:17:08.343510   67284 kubeadm.go:310] 
	I0729 02:17:08.343558   67284 kubeadm.go:310] This error is likely caused by:
	I0729 02:17:08.343604   67284 kubeadm.go:310] 	- The kubelet is not running
	I0729 02:17:08.343758   67284 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 02:17:08.343789   67284 kubeadm.go:310] 
	I0729 02:17:08.343962   67284 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 02:17:08.344018   67284 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0729 02:17:08.344060   67284 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0729 02:17:08.344070   67284 kubeadm.go:310] 
	I0729 02:17:08.344190   67284 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 02:17:08.344299   67284 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 02:17:08.344411   67284 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0729 02:17:08.344565   67284 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 02:17:08.344683   67284 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0729 02:17:08.344791   67284 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0729 02:17:08.346067   67284 kubeadm.go:310] W0729 02:13:06.280424   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 02:17:08.346470   67284 kubeadm.go:310] W0729 02:13:06.281184   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 02:17:08.346620   67284 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 02:17:08.346777   67284 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0729 02:17:08.346922   67284 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 02:17:08.346943   67284 kubeadm.go:394] duration metric: took 12m11.665280125s to StartCluster
	I0729 02:17:08.346995   67284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 02:17:08.347082   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 02:17:08.401027   67284 cri.go:89] found id: ""
	I0729 02:17:08.401054   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.401060   67284 logs.go:278] No container was found matching "kube-apiserver"
	I0729 02:17:08.401067   67284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 02:17:08.401120   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 02:17:08.447249   67284 cri.go:89] found id: "ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac"
	I0729 02:17:08.447275   67284 cri.go:89] found id: ""
	I0729 02:17:08.447284   67284 logs.go:276] 1 containers: [ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac]
	I0729 02:17:08.447344   67284 ssh_runner.go:195] Run: which crictl
	I0729 02:17:08.452656   67284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 02:17:08.452716   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 02:17:08.493632   67284 cri.go:89] found id: ""
	I0729 02:17:08.493657   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.493667   67284 logs.go:278] No container was found matching "coredns"
	I0729 02:17:08.493673   67284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 02:17:08.493846   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 02:17:08.536453   67284 cri.go:89] found id: ""
	I0729 02:17:08.536480   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.536490   67284 logs.go:278] No container was found matching "kube-scheduler"
	I0729 02:17:08.536497   67284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 02:17:08.536557   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 02:17:08.590091   67284 cri.go:89] found id: ""
	I0729 02:17:08.590121   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.590136   67284 logs.go:278] No container was found matching "kube-proxy"
	I0729 02:17:08.590144   67284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 02:17:08.590207   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 02:17:08.628010   67284 cri.go:89] found id: ""
	I0729 02:17:08.628038   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.628048   67284 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 02:17:08.628056   67284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 02:17:08.628132   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 02:17:08.665862   67284 cri.go:89] found id: ""
	I0729 02:17:08.665894   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.665904   67284 logs.go:278] No container was found matching "kindnet"
	I0729 02:17:08.665909   67284 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 02:17:08.665972   67284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 02:17:08.703325   67284 cri.go:89] found id: ""
	I0729 02:17:08.703356   67284 logs.go:276] 0 containers: []
	W0729 02:17:08.703367   67284 logs.go:278] No container was found matching "storage-provisioner"
	I0729 02:17:08.703378   67284 logs.go:123] Gathering logs for container status ...
	I0729 02:17:08.703392   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 02:17:08.750806   67284 logs.go:123] Gathering logs for kubelet ...
	I0729 02:17:08.750834   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 02:17:08.967023   67284 logs.go:123] Gathering logs for dmesg ...
	I0729 02:17:08.967090   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 02:17:08.984636   67284 logs.go:123] Gathering logs for describe nodes ...
	I0729 02:17:08.984664   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 02:17:09.073462   67284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 02:17:09.073483   67284 logs.go:123] Gathering logs for etcd [ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac] ...
	I0729 02:17:09.073495   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac"
	I0729 02:17:09.125102   67284 logs.go:123] Gathering logs for CRI-O ...
	I0729 02:17:09.125131   67284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0729 02:17:09.294136   67284 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001179259s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000370121s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0729 02:13:06.280424   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0729 02:13:06.281184   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 02:17:09.294186   67284 out.go:239] * 
	W0729 02:17:09.294246   67284 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001179259s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000370121s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0729 02:13:06.280424   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0729 02:13:06.281184   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 02:17:09.294272   67284 out.go:239] * 
	W0729 02:17:09.295048   67284 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 02:17:09.297866   67284 out.go:177] 
	W0729 02:17:09.299166   67284 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001179259s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000370121s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0729 02:13:06.280424   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0729 02:13:06.281184   10519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 02:17:09.299212   67284 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 02:17:09.299240   67284 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 02:17:09.300992   67284 out.go:177] 
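	The run above ends with minikube's own suggestion to pin the kubelet cgroup driver. A minimal, hedged sketch of that retry, assuming the profile name shown in these logs and the kvm2/cri-o combination this job exercises (the systemd cgroup driver is only taken from the suggestion itself, not verified here):
	
	# Retry the upgrade start with the kubelet cgroup driver pinned to systemd,
	# as suggested by the K8S_KUBELET_NOT_RUNNING hint above.
	minikube start -p kubernetes-upgrade-211243 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	
	# Then check whether the kubelet stayed healthy on the node.
	minikube -p kubernetes-upgrade-211243 ssh "sudo journalctl -u kubelet -n 50 --no-pager"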
	
	
	==> CRI-O <==
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.241521762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722219430241458334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb446a0d-5a11-4595-88b2-003dde534e7b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.242149717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55ebae01-038a-4068-969b-de8c6f5ebe5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.242199386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55ebae01-038a-4068-969b-de8c6f5ebe5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.242253638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac,PodSandboxId:ef3df721edb885cc4c186989f327642ec86d816b76cbf1ff67351c68916649d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722219188680767581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-211243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ca58f17edeb0e54a21947f792e1e27,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55ebae01-038a-4068-969b-de8c6f5ebe5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.277936642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80a641b9-442c-42b2-989b-a3f54ca06a25 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.278076486Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80a641b9-442c-42b2-989b-a3f54ca06a25 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.279615310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bf8cecb-abea-47d4-83da-7d126d0e1925 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.280050249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722219430279943043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bf8cecb-abea-47d4-83da-7d126d0e1925 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.281677166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd842c4f-42d3-4371-8c41-2d4261975de0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.281728017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd842c4f-42d3-4371-8c41-2d4261975de0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.281785209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac,PodSandboxId:ef3df721edb885cc4c186989f327642ec86d816b76cbf1ff67351c68916649d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722219188680767581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-211243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ca58f17edeb0e54a21947f792e1e27,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd842c4f-42d3-4371-8c41-2d4261975de0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.316116789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a73996fc-1b30-4b47-983a-6abf160392b2 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.316200054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a73996fc-1b30-4b47-983a-6abf160392b2 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.317373087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a13e6192-2d2e-4ee6-8295-37756a64327e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.317719943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722219430317695826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a13e6192-2d2e-4ee6-8295-37756a64327e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.318345175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e8a0822-13e9-4291-9a3a-d1dd9e307440 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.318416679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e8a0822-13e9-4291-9a3a-d1dd9e307440 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.318475855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac,PodSandboxId:ef3df721edb885cc4c186989f327642ec86d816b76cbf1ff67351c68916649d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722219188680767581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-211243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ca58f17edeb0e54a21947f792e1e27,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e8a0822-13e9-4291-9a3a-d1dd9e307440 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.351350799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7daafd03-7cf5-49b5-a4be-5a5daf4bf5c1 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.351448464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7daafd03-7cf5-49b5-a4be-5a5daf4bf5c1 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.352938627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cd90750-2523-40cb-a062-fe5c8a0d3c19 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.353515834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722219430353489689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cd90750-2523-40cb-a062-fe5c8a0d3c19 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.354194005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de0d37d7-2b0e-49bd-b1a4-363b38e8ca71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.354271065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de0d37d7-2b0e-49bd-b1a4-363b38e8ca71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:17:10 kubernetes-upgrade-211243 crio[3154]: time="2024-07-29 02:17:10.354353497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac,PodSandboxId:ef3df721edb885cc4c186989f327642ec86d816b76cbf1ff67351c68916649d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722219188680767581,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-211243,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21ca58f17edeb0e54a21947f792e1e27,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de0d37d7-2b0e-49bd-b1a4-363b38e8ca71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	ae6117afe7595       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   4 minutes ago       Running             etcd                4                   ef3df721edb88       etcd-kubernetes-upgrade-211243
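	Container status shows only etcd running (restart attempt 4), while every other control-plane container query earlier came back empty. A hedged way to pull that container's recent logs straight from the node, reusing the container ID listed above and the same crictl invocation the log gatherer itself uses:
	
	# Tail the lone running control-plane container (etcd) via CRI-O's crictl.
	minikube -p kubernetes-upgrade-211243 ssh "sudo crictl logs --tail 100 ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac"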
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.271870] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.061581] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061793] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.210246] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.129037] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.310996] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +4.435697] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +0.062700] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.572036] systemd-fstab-generator[857]: Ignoring "noauto" option for root device
	[Jul29 02:03] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.094161] kauditd_printk_skb: 97 callbacks suppressed
	[ +15.410975] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.298465] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.350951] systemd-fstab-generator[2894]: Ignoring "noauto" option for root device
	[  +0.297490] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +0.230259] systemd-fstab-generator[2963]: Ignoring "noauto" option for root device
	[  +0.362783] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[Jul29 02:04] systemd-fstab-generator[3289]: Ignoring "noauto" option for root device
	[  +0.103191] kauditd_printk_skb: 210 callbacks suppressed
	[  +3.104693] systemd-fstab-generator[3820]: Ignoring "noauto" option for root device
	[Jul29 02:09] kauditd_printk_skb: 112 callbacks suppressed
	[  +2.437531] systemd-fstab-generator[10159]: Ignoring "noauto" option for root device
	[Jul29 02:13] kauditd_printk_skb: 73 callbacks suppressed
	[  +1.336249] systemd-fstab-generator[10546]: Ignoring "noauto" option for root device
	
	
	==> etcd [ae6117afe7595bbde67432b3ece53e8bb46dc9adbb91d178698d965957ba40ac] <==
	{"level":"info","ts":"2024-07-29T02:13:08.842146Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T02:13:08.842409Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"5dc86d5b75c1766b","initial-advertise-peer-urls":["https://192.168.61.63:2380"],"listen-peer-urls":["https://192.168.61.63:2380"],"advertise-client-urls":["https://192.168.61.63:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.63:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T02:13:08.842456Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T02:13:08.842883Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.61.63:2380"}
	{"level":"info","ts":"2024-07-29T02:13:08.842959Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.61.63:2380"}
	{"level":"info","ts":"2024-07-29T02:13:09.620961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5dc86d5b75c1766b is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T02:13:09.621144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5dc86d5b75c1766b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T02:13:09.621186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5dc86d5b75c1766b received MsgPreVoteResp from 5dc86d5b75c1766b at term 1"}
	{"level":"info","ts":"2024-07-29T02:13:09.621219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5dc86d5b75c1766b became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T02:13:09.621246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5dc86d5b75c1766b received MsgVoteResp from 5dc86d5b75c1766b at term 2"}
	{"level":"info","ts":"2024-07-29T02:13:09.621277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5dc86d5b75c1766b became leader at term 2"}
	{"level":"info","ts":"2024-07-29T02:13:09.621306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5dc86d5b75c1766b elected leader 5dc86d5b75c1766b at term 2"}
	{"level":"info","ts":"2024-07-29T02:13:09.624309Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T02:13:09.625468Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5dc86d5b75c1766b","local-member-attributes":"{Name:kubernetes-upgrade-211243 ClientURLs:[https://192.168.61.63:2379]}","request-path":"/0/members/5dc86d5b75c1766b/attributes","cluster-id":"ba06178c7a8b2eee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T02:13:09.62585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T02:13:09.626069Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba06178c7a8b2eee","local-member-id":"5dc86d5b75c1766b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T02:13:09.626169Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T02:13:09.626228Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T02:13:09.626257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T02:13:09.627314Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T02:13:09.628076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T02:13:09.628112Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T02:13:09.628076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.63:2379"}
	{"level":"info","ts":"2024-07-29T02:13:09.628599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T02:13:09.629444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:17:10 up 14 min,  0 users,  load average: 0.09, 0.10, 0.09
	Linux kubernetes-upgrade-211243 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 02:16:57 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:16:57.705281   10553 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.61.63:8443: connect: connection refused" logger="UnhandledError"
	Jul 29 02:16:57 kubernetes-upgrade-211243 kubelet[10553]: I0729 02:16:57.861862   10553 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-211243"
	Jul 29 02:16:57 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:16:57.862684   10553 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.63:8443: connect: connection refused" node="kubernetes-upgrade-211243"
	Jul 29 02:16:58 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:16:58.110303   10553 eviction_manager.go:283] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-211243\" not found"
	Jul 29 02:17:01 kubernetes-upgrade-211243 kubelet[10553]: W0729 02:17:01.244642   10553 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.63:8443: connect: connection refused
	Jul 29 02:17:01 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:01.245102   10553 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.63:8443: connect: connection refused" logger="UnhandledError"
	Jul 29 02:17:04 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:04.113714   10553 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.63:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-211243.17e68d3a1258288f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-211243,UID:kubernetes-upgrade-211243,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-211243 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-211243,},FirstTimestamp:2024-07-29 02:13:08.071180431 +0000 UTC m=+0.766342584,LastTimestamp:2024-07-29 02:13:08.071180431 +0000 UTC m=+0.766342584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-211243,}"
	Jul 29 02:17:04 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:04.705262   10553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-211243?timeout=10s\": dial tcp 192.168.61.63:8443: connect: connection refused" interval="7s"
	Jul 29 02:17:04 kubernetes-upgrade-211243 kubelet[10553]: I0729 02:17:04.865154   10553 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-211243"
	Jul 29 02:17:04 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:04.866113   10553 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.63:8443: connect: connection refused" node="kubernetes-upgrade-211243"
	Jul 29 02:17:05 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:05.054560   10553 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-211243_kube-system_0ef06612d41f085267b52ff76d5b3b30_1\" is already in use by 0b240be01763a82196ba0bf64b034d88db3c01ac272b5ce223a1e7c4267b8fa8. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="0109729af39efc257ab64dbb0b5cc2c0fe7d32cc1528402c67f627d9e2a1a499"
	Jul 29 02:17:05 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:05.055092   10553 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:kube-scheduler,Image:registry.k8s.io/kube-scheduler:v1.31.0-beta.0,Command:[kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=false],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/scheduler.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-scheduler-kubernetes-upgrade-211243_kube-system(0ef06612d41f085267b52ff76d5b3b30): CreateContainerError: the container name \"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-211243_kube-system_0ef06612d41f085267b52ff76d5b3b30_1\" is already in use by 0b240be01763a82196ba0bf64b034d88db3c01ac272b5ce223a1e7c4267b8fa8. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jul 29 02:17:05 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:05.056597   10553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-211243_kube-system_0ef06612d41f085267b52ff76d5b3b30_1\\\" is already in use by 0b240be01763a82196ba0bf64b034d88db3c01ac272b5ce223a1e7c4267b8fa8. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-211243" podUID="0ef06612d41f085267b52ff76d5b3b30"
	Jul 29 02:17:05 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:05.057117   10553 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-211243_kube-system_c89c180f99029a822effb572e8ac120e_1\" is already in use by 138489047c336c3434cad71ff8c73d3119ea009fde2f945531fb1b0c9ab57b0f. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="69702473e0bcf3f1acf52cad55972d4edaceae28bbeed908701794e8095b7f82"
	Jul 29 02:17:05 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:05.057238   10553 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.31.0-beta.0,Command:[kube-apiserver --advertise-address=192.168.61.63 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.61.63,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8443 },Host:192.168.61.63,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.61.63,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-kubernetes-upgrade-211243_kube-system(c89c180f99029a822effb572e8ac120e): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-211243_kube-system_c89c180f99029a822effb572e8ac120e_1\" is already in use by 138489047c336c3434cad71ff8c73d3119ea009fde2f945531fb1b0c9ab57b0f. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jul 29 02:17:05 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:05.058431   10553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-211243_kube-system_c89c180f99029a822effb572e8ac120e_1\\\" is already in use by 138489047c336c3434cad71ff8c73d3119ea009fde2f945531fb1b0c9ab57b0f. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-211243" podUID="c89c180f99029a822effb572e8ac120e"
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:08.053940   10553 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-211243_kube-system_06fa1cf0312db8d3f697d6f826de606d_1\" is already in use by 4dee54c5d41247555c3d6c1d657fd69cff5e6501d08ec33344f477a210602929. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="bdc6a6564ad9578543ee8c81621f8ea303e7d8b83e396b68e614d49358cc48fa"
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:08.054203   10553 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.31.0-beta.0,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-211243_kube-system(06fa1cf0312db8d3f697d6f826de606d): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-211243_kube-system_06fa1cf0312db8d3f697d6f826de606d_1\" is already in use by 4dee54c5d41247555c3d6c1d657fd69cff5e6501d08ec33344f477a210602929. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:08.055397   10553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-211243_kube-system_06fa1cf0312db8d3f697d6f826de606d_1\\\" is already in use by 4dee54c5d41247555c3d6c1d657fd69cff5e6501d08ec33344f477a210602929. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-211243" podUID="06fa1cf0312db8d3f697d6f826de606d"
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:08.061476   10553 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 02:17:08 kubernetes-upgrade-211243 kubelet[10553]: E0729 02:17:08.111638   10553 eviction_manager.go:283] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-211243\" not found"
	

                                                
                                                
-- /stdout --
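Note: the dominant error in the kubelet excerpt above is CRI-O rejecting new kube-scheduler, kube-apiserver and kube-controller-manager containers because containers with the same generated k8s_* names still exist from the previous start. A minimal manual way to confirm that from the host (a sketch only, not something the test harness runs) is to list the non-running containers inside the node:
	out/minikube-linux-amd64 -p kubernetes-upgrade-211243 ssh "sudo crictl ps -a --state exited"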
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-211243 -n kubernetes-upgrade-211243
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-211243 -n kubernetes-upgrade-211243: exit status 2 (229.52718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-211243" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-211243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-211243
--- FAIL: TestKubernetesUpgrade (1197.22s)
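Note: had the profile been recovered by hand instead of deleted, the error text above spells out the remedy: remove the stale containers so CRI-O can reuse their names. A hedged sketch using the container IDs quoted in the log (IDs would differ on another run, and a container still running would need "crictl stop" first):
	out/minikube-linux-amd64 -p kubernetes-upgrade-211243 ssh "sudo crictl rm 0b240be01763a82196ba0bf64b034d88db3c01ac272b5ce223a1e7c4267b8fa8"
	out/minikube-linux-amd64 -p kubernetes-upgrade-211243 ssh "sudo crictl rm 138489047c336c3434cad71ff8c73d3119ea009fde2f945531fb1b0c9ab57b0f"
	out/minikube-linux-amd64 -p kubernetes-upgrade-211243 ssh "sudo crictl rm 4dee54c5d41247555c3d6c1d657fd69cff5e6501d08ec33344f477a210602929"
Once the conflicting names are freed, the kubelet's next sync loop should be able to recreate the control-plane containers under their original names.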

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (66.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-112077 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-112077 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.792636497s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-112077] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-112077" primary control-plane node in "pause-112077" cluster
	* Updating the running kvm2 "pause-112077" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-112077" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:59:00.286876   58942 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:59:00.287026   58942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:59:00.287085   58942 out.go:304] Setting ErrFile to fd 2...
	I0729 01:59:00.287115   58942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:59:00.287459   58942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:59:00.288163   58942 out.go:298] Setting JSON to false
	I0729 01:59:00.289543   58942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6086,"bootTime":1722212254,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:59:00.289644   58942 start.go:139] virtualization: kvm guest
	I0729 01:59:00.291994   58942 out.go:177] * [pause-112077] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:59:00.293592   58942 notify.go:220] Checking for updates...
	I0729 01:59:00.294169   58942 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:59:00.295484   58942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:59:00.296720   58942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:59:00.297937   58942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:00.299194   58942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:59:00.300550   58942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:59:00.302344   58942 config.go:182] Loaded profile config "pause-112077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:59:00.302819   58942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:00.302876   58942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:00.323033   58942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0729 01:59:00.323454   58942 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:00.323972   58942 main.go:141] libmachine: Using API Version  1
	I0729 01:59:00.324005   58942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:00.324320   58942 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:00.324489   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:00.324716   58942 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:59:00.325006   58942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:00.325029   58942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:00.341817   58942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40979
	I0729 01:59:00.342202   58942 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:00.342674   58942 main.go:141] libmachine: Using API Version  1
	I0729 01:59:00.342685   58942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:00.342989   58942 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:00.343171   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:00.382715   58942 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:59:00.384197   58942 start.go:297] selected driver: kvm2
	I0729 01:59:00.384220   58942 start.go:901] validating driver "kvm2" against &{Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:59:00.384431   58942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:59:00.384897   58942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:59:00.384988   58942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:59:00.408322   58942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:59:00.409216   58942 cni.go:84] Creating CNI manager for ""
	I0729 01:59:00.409240   58942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:59:00.409322   58942 start.go:340] cluster config:
	{Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:59:00.409517   58942 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:59:00.411349   58942 out.go:177] * Starting "pause-112077" primary control-plane node in "pause-112077" cluster
	I0729 01:59:00.412684   58942 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:59:00.412720   58942 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:59:00.412729   58942 cache.go:56] Caching tarball of preloaded images
	I0729 01:59:00.412797   58942 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:59:00.412806   58942 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:59:00.412909   58942 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/config.json ...
	I0729 01:59:00.413098   58942 start.go:360] acquireMachinesLock for pause-112077: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:59:00.413139   58942 start.go:364] duration metric: took 23.342µs to acquireMachinesLock for "pause-112077"
	I0729 01:59:00.413152   58942 start.go:96] Skipping create...Using existing machine configuration
	I0729 01:59:00.413157   58942 fix.go:54] fixHost starting: 
	I0729 01:59:00.413469   58942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:00.413510   58942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:00.429887   58942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0729 01:59:00.430262   58942 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:00.430786   58942 main.go:141] libmachine: Using API Version  1
	I0729 01:59:00.430811   58942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:00.431125   58942 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:00.431299   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:00.431464   58942 main.go:141] libmachine: (pause-112077) Calling .GetState
	I0729 01:59:00.769612   58942 fix.go:112] recreateIfNeeded on pause-112077: state=Running err=<nil>
	W0729 01:59:00.769633   58942 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 01:59:00.771639   58942 out.go:177] * Updating the running kvm2 "pause-112077" VM ...
	I0729 01:59:00.773251   58942 machine.go:94] provisionDockerMachine start ...
	I0729 01:59:00.773293   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:00.773594   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:00.776414   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.776849   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:00.776895   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.777070   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:00.777277   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.777453   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.777591   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:00.777734   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:00.777988   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:00.778003   58942 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:59:00.900982   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-112077
	
	I0729 01:59:00.901024   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:00.901297   58942 buildroot.go:166] provisioning hostname "pause-112077"
	I0729 01:59:00.901324   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:00.901512   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:00.904914   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.905365   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:00.905393   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.905621   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:00.905823   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.905993   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.906158   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:00.906313   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:00.906526   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:00.906545   58942 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-112077 && echo "pause-112077" | sudo tee /etc/hostname
	I0729 01:59:01.049250   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-112077
	
	I0729 01:59:01.049282   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.052828   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.053222   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.053265   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.053421   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.053628   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.054012   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.054213   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.054460   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:01.054703   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:01.054727   58942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-112077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-112077/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-112077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:59:01.172295   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:59:01.172333   58942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:59:01.172362   58942 buildroot.go:174] setting up certificates
	I0729 01:59:01.172376   58942 provision.go:84] configureAuth start
	I0729 01:59:01.172391   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:01.172665   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:01.175394   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.175763   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.175800   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.175954   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.178094   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.178393   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.178426   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.178528   58942 provision.go:143] copyHostCerts
	I0729 01:59:01.178596   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:59:01.178613   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:59:01.178679   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:59:01.178782   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:59:01.178794   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:59:01.178828   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:59:01.178894   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:59:01.178905   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:59:01.178932   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:59:01.178991   58942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.pause-112077 san=[127.0.0.1 192.168.39.22 localhost minikube pause-112077]
	I0729 01:59:01.320795   58942 provision.go:177] copyRemoteCerts
	I0729 01:59:01.320854   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:59:01.320876   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.324209   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.324635   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.324698   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.324884   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.325071   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.325233   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.325424   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:01.417411   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:59:01.451176   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:59:01.480705   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 01:59:01.509060   58942 provision.go:87] duration metric: took 336.668444ms to configureAuth
	I0729 01:59:01.509086   58942 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:59:01.509468   58942 config.go:182] Loaded profile config "pause-112077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:59:01.509573   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.512733   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.513109   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.513138   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.513370   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.513602   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.513786   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.514002   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.514189   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:01.514407   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:01.514429   58942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:59:07.074488   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:59:07.074537   58942 machine.go:97] duration metric: took 6.301250035s to provisionDockerMachine
	I0729 01:59:07.074548   58942 start.go:293] postStartSetup for "pause-112077" (driver="kvm2")
	I0729 01:59:07.074558   58942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:59:07.074571   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.075012   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:59:07.075043   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.078131   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.078524   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.078562   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.078713   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.078898   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.079076   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.079216   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.166421   58942 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:59:07.171015   58942 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:59:07.171041   58942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:59:07.171114   58942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:59:07.171203   58942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:59:07.171301   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:59:07.180907   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:59:07.204797   58942 start.go:296] duration metric: took 130.235759ms for postStartSetup
	I0729 01:59:07.204851   58942 fix.go:56] duration metric: took 6.791691719s for fixHost
	I0729 01:59:07.204875   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.207558   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.207928   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.207955   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.208138   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.208322   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.208492   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.208600   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.208768   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:07.208941   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:07.208950   58942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 01:59:07.319802   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722218347.301546106
	
	I0729 01:59:07.319822   58942 fix.go:216] guest clock: 1722218347.301546106
	I0729 01:59:07.319831   58942 fix.go:229] Guest: 2024-07-29 01:59:07.301546106 +0000 UTC Remote: 2024-07-29 01:59:07.204855132 +0000 UTC m=+6.986045348 (delta=96.690974ms)
	I0729 01:59:07.319870   58942 fix.go:200] guest clock delta is within tolerance: 96.690974ms
	I0729 01:59:07.319876   58942 start.go:83] releasing machines lock for "pause-112077", held for 6.90672832s
	I0729 01:59:07.319909   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.320195   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:07.323127   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.323540   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.323565   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.323731   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324310   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324481   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324537   58942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:59:07.324573   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.324713   58942 ssh_runner.go:195] Run: cat /version.json
	I0729 01:59:07.324743   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.327319   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327520   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327710   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.327728   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327825   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.327850   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327912   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.328077   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.328100   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.328213   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.328406   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.328449   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.328563   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.328618   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.428600   58942 ssh_runner.go:195] Run: systemctl --version
	I0729 01:59:07.435188   58942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:59:07.590651   58942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:59:07.596996   58942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:59:07.597060   58942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:59:07.606543   58942 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:59:07.606564   58942 start.go:495] detecting cgroup driver to use...
	I0729 01:59:07.606630   58942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:59:07.623623   58942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:59:07.638061   58942 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:59:07.638115   58942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:59:07.653649   58942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:59:07.668951   58942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:59:07.810006   58942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:59:07.949024   58942 docker.go:233] disabling docker service ...
	I0729 01:59:07.949101   58942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:59:07.967184   58942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:59:07.981535   58942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:59:08.110180   58942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:59:08.241015   58942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:59:08.256868   58942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:59:08.278103   58942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:59:08.278162   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.289116   58942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:59:08.289174   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.299982   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.311401   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.322358   58942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:59:08.333572   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.344638   58942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.356654   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.368734   58942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:59:08.379455   58942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:59:08.389988   58942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:59:08.536994   58942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:59:14.119322   58942 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.582282318s)
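The lines above show minikube moving the node from containerd/docker to CRI-O: the competing runtimes are stopped and masked, crictl is pointed at the CRI-O socket via /etc/crictl.yaml, and /etc/crio/crio.conf.d/02-crio.conf is patched with sed before crio is restarted. A minimal Go sketch of the same command sequence, shown as a dry run that only prints the commands; runtimeSwitchCommands is a hypothetical helper, not minikube's own API, and it assumes passwordless sudo on the target host:

package main

import "fmt"

// runtimeSwitchCommands mirrors the shell steps in the log above: stop the
// competing runtimes, point crictl at CRI-O, set the pause image and cgroup
// manager, then restart crio. Paths and values are the ones visible in the
// log; the helper itself is illustrative only.
func runtimeSwitchCommands(pauseImage, cgroupManager string) []string {
	return []string{
		"sudo systemctl stop -f containerd",
		"sudo systemctl stop -f cri-docker.socket cri-docker.service",
		"sudo systemctl disable cri-docker.socket",
		"sudo systemctl stop -f docker.socket docker.service",
		"sudo systemctl disable docker.socket",
		`sudo sh -c 'printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupManager),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	// Dry run: print the commands instead of executing them over SSH.
	for _, cmd := range runtimeSwitchCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}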
	I0729 01:59:14.119361   58942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:59:14.119412   58942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:59:14.124545   58942 start.go:563] Will wait 60s for crictl version
	I0729 01:59:14.124605   58942 ssh_runner.go:195] Run: which crictl
	I0729 01:59:14.128484   58942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:59:14.168118   58942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:59:14.168203   58942 ssh_runner.go:195] Run: crio --version
	I0729 01:59:14.200449   58942 ssh_runner.go:195] Run: crio --version
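The runtime is probed with "sudo /usr/bin/crictl version", and the log records the Version, RuntimeName, RuntimeVersion and RuntimeApiVersion fields before continuing. A small illustrative Go parser for that plain-text output, assuming the "key:  value" layout shown above; parseCrictlVersion is a hypothetical helper, not the one minikube uses internally:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion turns the plain-text "crictl version" output seen in the
// log (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion) into a map.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 2)
		if len(parts) == 2 {
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	return fields
}

func main() {
	sample := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(sample)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
}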
	I0729 01:59:14.232801   58942 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:59:14.234159   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:14.237240   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:14.237641   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:14.237665   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:14.237893   58942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:59:14.242341   58942 kubeadm.go:883] updating cluster {Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:59:14.242483   58942 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:59:14.242531   58942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:59:14.287217   58942 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:59:14.287245   58942 crio.go:433] Images already preloaded, skipping extraction
	I0729 01:59:14.287300   58942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:59:14.323695   58942 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:59:14.323718   58942 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:59:14.323728   58942 kubeadm.go:934] updating node { 192.168.39.22 8443 v1.30.3 crio true true} ...
	I0729 01:59:14.323855   58942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-112077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
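The kubelet drop-in above is generated from the node config (binary path keyed on the Kubernetes version, --hostname-override and --node-ip from the node entry) and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A simplified sketch of rendering such a drop-in with text/template; the template text and field names here are assumptions for illustration, the real minikube template carries more flags:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is a simplified stand-in for the systemd drop-in shown in the
// log above.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log above; rendered to stdout instead of being
	// scp'd to the node.
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "pause-112077",
		"NodeIP":            "192.168.39.22",
	})
}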
	I0729 01:59:14.323942   58942 ssh_runner.go:195] Run: crio config
	I0729 01:59:14.374595   58942 cni.go:84] Creating CNI manager for ""
	I0729 01:59:14.374621   58942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:59:14.374632   58942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:59:14.374651   58942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-112077 NodeName:pause-112077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:59:14.374797   58942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-112077"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
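The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A small illustrative check that walks such a stream and lists the apiVersion/kind of each document; it assumes the external gopkg.in/yaml.v3 module is available and is not part of minikube itself:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3" // external dependency, assumed available via go.mod
)

// listKinds reads a multi-document YAML stream and reports each document's
// apiVersion/kind, which is enough to sanity-check that all four kubeadm
// documents are present.
func listKinds(doc string) ([]string, error) {
	dec := yaml.NewDecoder(strings.NewReader(doc))
	var kinds []string
	for {
		var m map[string]interface{}
		if err := dec.Decode(&m); err != nil {
			if err == io.EOF {
				return kinds, nil
			}
			return nil, err
		}
		kinds = append(kinds, fmt.Sprintf("%v/%v", m["apiVersion"], m["kind"]))
	}
}

func main() {
	sample := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
	kinds, err := listKinds(sample)
	fmt.Println(kinds, err)
}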
	
	I0729 01:59:14.374857   58942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:59:14.386277   58942 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:59:14.386344   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:59:14.396825   58942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 01:59:14.414514   58942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:59:14.432141   58942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 01:59:14.450392   58942 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0729 01:59:14.454825   58942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:59:14.594288   58942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:59:14.610451   58942 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077 for IP: 192.168.39.22
	I0729 01:59:14.610477   58942 certs.go:194] generating shared ca certs ...
	I0729 01:59:14.610499   58942 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:59:14.610669   58942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:59:14.610731   58942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:59:14.610744   58942 certs.go:256] generating profile certs ...
	I0729 01:59:14.610857   58942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/client.key
	I0729 01:59:14.610946   58942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.key.f5507500
	I0729 01:59:14.610981   58942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.key
	I0729 01:59:14.611118   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:59:14.611163   58942 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:59:14.611175   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:59:14.611200   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:59:14.611221   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:59:14.611240   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:59:14.611283   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:59:14.612635   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:59:14.639933   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:59:14.671306   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:59:14.696529   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:59:14.723633   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 01:59:14.750066   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:59:14.775261   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:59:14.800245   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:59:14.824108   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:59:14.848134   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:59:14.873158   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:59:14.935107   58942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:59:15.026387   58942 ssh_runner.go:195] Run: openssl version
	I0729 01:59:15.130264   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:59:15.195680   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.224631   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.224712   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.281540   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:59:15.324983   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:59:15.388784   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.397220   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.397287   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.438073   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:59:15.472575   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:59:15.515620   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.542374   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.542440   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.565637   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
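The steps above copy each CA PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 back to the PEM so OpenSSL-based clients can locate the CA by hash. A hedged Go sketch of the same idea; linkBySubjectHash is a hypothetical helper that shells out to the openssl binary (assumed to be on PATH) rather than computing the subject hash itself:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// symlinks <hash>.0 in certsDir back to the PEM file, as in the log above.
func linkBySubjectHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}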
	I0729 01:59:15.599468   58942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:59:15.616932   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:59:15.641159   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:59:15.654715   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:59:15.662431   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:59:15.679608   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:59:15.716800   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
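Each "openssl x509 -checkend 86400" call above asks whether the given control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. An equivalent pure-Go check with crypto/x509, offered as an illustration (validFor is a hypothetical helper name):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still
// valid for at least d from now -- the same question the
// "openssl x509 -checkend 86400" calls above are answering.
func validFor(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}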
	I0729 01:59:15.752446   58942 kubeadm.go:392] StartCluster: {Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:59:15.752546   58942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:59:15.752616   58942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:59:15.841118   58942 cri.go:89] found id: "9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0"
	I0729 01:59:15.841142   58942 cri.go:89] found id: "85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c"
	I0729 01:59:15.841149   58942 cri.go:89] found id: "8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35"
	I0729 01:59:15.841155   58942 cri.go:89] found id: "4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209"
	I0729 01:59:15.841160   58942 cri.go:89] found id: "ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3"
	I0729 01:59:15.841165   58942 cri.go:89] found id: "e4d9f94c7295602523ac69bb831cc319589d7b7ffb759c0822d74a5f4dd4f111"
	I0729 01:59:15.841169   58942 cri.go:89] found id: "6b220db8847222b6aa66fb6db253b1090864c6f6b39d4af7370baedd227ac46f"
	I0729 01:59:15.841174   58942 cri.go:89] found id: "d6ad9b45cb0b70ae5153758bc6999651e9c4e36cd7a8952d9b5164cab11b0d8e"
	I0729 01:59:15.841179   58942 cri.go:89] found id: "0012d6373d33da8958ee44eb8a5a736bfdebdca4b4f8b302fb57d5c64fb0397e"
	I0729 01:59:15.841188   58942 cri.go:89] found id: "f34eddb15984881aacb430691d7f4d407a4e08e554b8df639f4ecdde23f8c561"
	I0729 01:59:15.841192   58942 cri.go:89] found id: ""
	I0729 01:59:15.841248   58942 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
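The stderr block above ends while StartCluster is enumerating kube-system containers: "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" prints one container ID per line, which is what the "found id:" entries reflect. A minimal Go sketch of that enumeration, assuming crictl is on PATH and the CRI socket is reachable (hence the sudo in the log); kubeSystemContainers is an illustrative helper, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists all kube-system container IDs via crictl, as in
// the log above: the --quiet flag makes crictl print one ID per line.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	fmt.Println(len(ids), "containers", err)
}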
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-112077 -n pause-112077
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-112077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-112077 logs -n 25: (1.920238671s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:54 UTC | 29 Jul 24 01:55 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-713702             | running-upgrade-713702    | jenkins | v1.33.1 | 29 Jul 24 01:55 UTC | 29 Jul 24 01:57 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:55 UTC | 29 Jul 24 01:55 UTC |
	| start   | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:55 UTC | 29 Jul 24 01:56 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-137446 ssh cat     | force-systemd-flag-137446 | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:56 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-137446          | force-systemd-flag-137446 | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:56 UTC |
	| start   | -p cert-options-343391                | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:57 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-703567 sudo           | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-713702             | running-upgrade-713702    | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p pause-112077 --memory=2048         | pause-112077              | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:59 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-343391 ssh               | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-343391 -- sudo        | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-343391                | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p kubernetes-upgrade-211243          | kubernetes-upgrade-211243 | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-703567 sudo           | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p stopped-upgrade-804241             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-923851             | cert-expiration-923851    | jenkins | v1.33.1 | 29 Jul 24 01:58 UTC | 29 Jul 24 01:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-923851             | cert-expiration-923851    | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC | 29 Jul 24 01:59 UTC |
	| start   | -p pause-112077                       | pause-112077              | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC | 29 Jul 24 02:00 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p auto-464146 --memory=3072          | auto-464146               | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-804241 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 01:59 UTC | 29 Jul 24 01:59 UTC |
	| start   | -p stopped-upgrade-804241             | stopped-upgrade-804241    | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:59:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:59:03.858701   59122 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:59:03.858966   59122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:59:03.858975   59122 out.go:304] Setting ErrFile to fd 2...
	I0729 01:59:03.858980   59122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:59:03.859201   59122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:59:03.859778   59122 out.go:298] Setting JSON to false
	I0729 01:59:03.860701   59122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6090,"bootTime":1722212254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:59:03.860767   59122 start.go:139] virtualization: kvm guest
	I0729 01:59:03.863135   59122 out.go:177] * [stopped-upgrade-804241] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:59:03.864626   59122 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:59:03.864641   59122 notify.go:220] Checking for updates...
	I0729 01:59:03.867163   59122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:59:03.868421   59122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:59:03.869654   59122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:03.870866   59122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:59:03.872038   59122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:59:03.873617   59122 config.go:182] Loaded profile config "stopped-upgrade-804241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0729 01:59:03.873996   59122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:03.874072   59122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:03.888916   59122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0729 01:59:03.889315   59122 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:03.889789   59122 main.go:141] libmachine: Using API Version  1
	I0729 01:59:03.889811   59122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:03.890167   59122 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:03.890331   59122 main.go:141] libmachine: (stopped-upgrade-804241) Calling .DriverName
	I0729 01:59:03.892303   59122 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 01:59:03.893605   59122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:59:03.893902   59122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:03.893942   59122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:03.908482   59122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0729 01:59:03.908842   59122 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:03.909464   59122 main.go:141] libmachine: Using API Version  1
	I0729 01:59:03.909495   59122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:03.909883   59122 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:03.910099   59122 main.go:141] libmachine: (stopped-upgrade-804241) Calling .DriverName
	I0729 01:59:03.944979   59122 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:59:03.946264   59122 start.go:297] selected driver: kvm2
	I0729 01:59:03.946279   59122 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-804241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-804
241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.165 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 01:59:03.946404   59122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:59:03.947253   59122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:59:03.947322   59122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:59:03.962696   59122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:59:03.963052   59122 cni.go:84] Creating CNI manager for ""
	I0729 01:59:03.963091   59122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:59:03.963158   59122 start.go:340] cluster config:
	{Name:stopped-upgrade-804241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-804241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.165 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 01:59:03.963264   59122 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:59:03.965187   59122 out.go:177] * Starting "stopped-upgrade-804241" primary control-plane node in "stopped-upgrade-804241" cluster
	I0729 01:59:04.223211   57807 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 01:59:04.223845   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:04.224074   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
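The kubelet-check lines above (from the kubernetes-upgrade run, process 57807) show kubeadm probing http://localhost:10248/healthz and getting "connection refused", i.e. the kubelet never came up on that node. A short Go sketch of the same probe, assuming only that 10248 is the kubelet's default healthz port; probeKubeletHealthz is an illustrative helper, not kubeadm's code:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeKubeletHealthz performs the check kubeadm's kubelet-check reports on
// above: GET http://localhost:10248/healthz and expect HTTP 200.
func probeKubeletHealthz() error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		return err // e.g. "connection refused" while the kubelet is down
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(probeKubeletHealthz())
}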
	I0729 01:59:00.773251   58942 machine.go:94] provisionDockerMachine start ...
	I0729 01:59:00.773293   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:00.773594   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:00.776414   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.776849   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:00.776895   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.777070   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:00.777277   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.777453   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.777591   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:00.777734   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:00.777988   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:00.778003   58942 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:59:00.900982   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-112077
	
	I0729 01:59:00.901024   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:00.901297   58942 buildroot.go:166] provisioning hostname "pause-112077"
	I0729 01:59:00.901324   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:00.901512   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:00.904914   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.905365   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:00.905393   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.905621   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:00.905823   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.905993   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.906158   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:00.906313   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:00.906526   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:00.906545   58942 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-112077 && echo "pause-112077" | sudo tee /etc/hostname
	I0729 01:59:01.049250   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-112077
	
	I0729 01:59:01.049282   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.052828   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.053222   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.053265   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.053421   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.053628   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.054012   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.054213   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.054460   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:01.054703   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:01.054727   58942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-112077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-112077/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-112077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:59:01.172295   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
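The SSH snippet above makes the hostname change idempotent: if no /etc/hosts line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A side-effect-free Go sketch of that rule, operating on the file contents as a string; ensureHostsEntry is a hypothetical helper for illustration only:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry applies the same rule as the shell snippet above: leave the
// file alone if the hostname is present, otherwise rewrite or append the
// 127.0.1.1 entry.
func ensureHostsEntry(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "pause-112077"))
}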
	I0729 01:59:01.172333   58942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:59:01.172362   58942 buildroot.go:174] setting up certificates
	I0729 01:59:01.172376   58942 provision.go:84] configureAuth start
	I0729 01:59:01.172391   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:01.172665   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:01.175394   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.175763   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.175800   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.175954   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.178094   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.178393   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.178426   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.178528   58942 provision.go:143] copyHostCerts
	I0729 01:59:01.178596   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:59:01.178613   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:59:01.178679   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:59:01.178782   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:59:01.178794   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:59:01.178828   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:59:01.178894   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:59:01.178905   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:59:01.178932   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:59:01.178991   58942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.pause-112077 san=[127.0.0.1 192.168.39.22 localhost minikube pause-112077]
	I0729 01:59:01.320795   58942 provision.go:177] copyRemoteCerts
	I0729 01:59:01.320854   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:59:01.320876   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.324209   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.324635   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.324698   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.324884   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.325071   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.325233   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.325424   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:01.417411   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:59:01.451176   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:59:01.480705   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 01:59:01.509060   58942 provision.go:87] duration metric: took 336.668444ms to configureAuth
	I0729 01:59:01.509086   58942 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:59:01.509468   58942 config.go:182] Loaded profile config "pause-112077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:59:01.509573   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.512733   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.513109   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.513138   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.513370   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.513602   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.513786   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.514002   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.514189   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:01.514407   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:01.514429   58942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:59:01.061692   59039 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:59:01.061756   59039 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:59:01.061777   59039 cache.go:56] Caching tarball of preloaded images
	I0729 01:59:01.061864   59039 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:59:01.061879   59039 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:59:01.061998   59039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/auto-464146/config.json ...
	I0729 01:59:01.062026   59039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/auto-464146/config.json: {Name:mk0dee52ca89978662c54ea73f7ceed742d218d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:59:01.062195   59039 start.go:360] acquireMachinesLock for auto-464146: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:59:07.319955   59039 start.go:364] duration metric: took 6.257735754s to acquireMachinesLock for "auto-464146"
	I0729 01:59:07.320022   59039 start.go:93] Provisioning new machine with config: &{Name:auto-464146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-464146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}

	I0729 01:59:07.320203   59039 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 01:59:03.966334   59122 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0729 01:59:03.966388   59122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0729 01:59:03.966412   59122 cache.go:56] Caching tarball of preloaded images
	I0729 01:59:03.966518   59122 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:59:03.966532   59122 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0729 01:59:03.966655   59122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/stopped-upgrade-804241/config.json ...
	I0729 01:59:03.966929   59122 start.go:360] acquireMachinesLock for stopped-upgrade-804241: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:59:09.224611   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:09.224876   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
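The interleaved kubeadm output above comes from repeatedly probing the kubelet health endpoint at http://localhost:10248/healthz and getting connection refused. A small sketch of that kind of poll loop follows; the 40s budget and 1s pause between attempts are assumptions, not kubeadm's actual timings.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the health endpoint until it answers 200 OK or the
// budget runs out; "connection refused" simply counts as another retry.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %s", timeout)
}

func main() {
	if err := waitForKubelet("http://localhost:10248/healthz", 40*time.Second); err != nil {
		fmt.Println(err)
	}
}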
	I0729 01:59:07.074488   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:59:07.074537   58942 machine.go:97] duration metric: took 6.301250035s to provisionDockerMachine
	I0729 01:59:07.074548   58942 start.go:293] postStartSetup for "pause-112077" (driver="kvm2")
	I0729 01:59:07.074558   58942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:59:07.074571   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.075012   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:59:07.075043   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.078131   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.078524   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.078562   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.078713   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.078898   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.079076   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.079216   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.166421   58942 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:59:07.171015   58942 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:59:07.171041   58942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:59:07.171114   58942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:59:07.171203   58942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:59:07.171301   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:59:07.180907   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:59:07.204797   58942 start.go:296] duration metric: took 130.235759ms for postStartSetup
	I0729 01:59:07.204851   58942 fix.go:56] duration metric: took 6.791691719s for fixHost
	I0729 01:59:07.204875   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.207558   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.207928   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.207955   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.208138   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.208322   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.208492   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.208600   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.208768   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:07.208941   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:07.208950   58942 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 01:59:07.319802   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722218347.301546106
	
	I0729 01:59:07.319822   58942 fix.go:216] guest clock: 1722218347.301546106
	I0729 01:59:07.319831   58942 fix.go:229] Guest: 2024-07-29 01:59:07.301546106 +0000 UTC Remote: 2024-07-29 01:59:07.204855132 +0000 UTC m=+6.986045348 (delta=96.690974ms)
	I0729 01:59:07.319870   58942 fix.go:200] guest clock delta is within tolerance: 96.690974ms
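fix.go above compares the guest clock (read over SSH with date a few lines earlier) against the host clock and only acts if the delta exceeds a tolerance. A tiny sketch of that comparison using the timestamps from the log; the 2-second tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"time"
)

// clockNeedsSync reports the absolute guest/host delta and whether it
// exceeds the allowed tolerance.
func clockNeedsSync(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta > tolerance
}

func main() {
	guest := time.Unix(0, 1722218347301546106) // 2024-07-29 01:59:07.301546106 UTC, from the log
	host := time.Date(2024, 7, 29, 1, 59, 7, 204855132, time.UTC)
	delta, resync := clockNeedsSync(guest, host, 2*time.Second)
	fmt.Printf("delta=%v needs resync=%v\n", delta, resync) // delta=96.690974ms needs resync=false
}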
	I0729 01:59:07.319876   58942 start.go:83] releasing machines lock for "pause-112077", held for 6.90672832s
	I0729 01:59:07.319909   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.320195   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:07.323127   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.323540   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.323565   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.323731   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324310   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324481   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324537   58942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:59:07.324573   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.324713   58942 ssh_runner.go:195] Run: cat /version.json
	I0729 01:59:07.324743   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.327319   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327520   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327710   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.327728   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327825   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.327850   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327912   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.328077   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.328100   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.328213   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.328406   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.328449   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.328563   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.328618   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.428600   58942 ssh_runner.go:195] Run: systemctl --version
	I0729 01:59:07.435188   58942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:59:07.590651   58942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:59:07.596996   58942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:59:07.597060   58942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:59:07.606543   58942 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:59:07.606564   58942 start.go:495] detecting cgroup driver to use...
	I0729 01:59:07.606630   58942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:59:07.623623   58942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:59:07.638061   58942 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:59:07.638115   58942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:59:07.653649   58942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:59:07.668951   58942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:59:07.810006   58942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:59:07.949024   58942 docker.go:233] disabling docker service ...
	I0729 01:59:07.949101   58942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:59:07.967184   58942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:59:07.981535   58942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:59:08.110180   58942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:59:08.241015   58942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:59:08.256868   58942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:59:08.278103   58942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:59:08.278162   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.289116   58942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:59:08.289174   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.299982   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.311401   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.322358   58942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:59:08.333572   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.344638   58942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.356654   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.368734   58942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:59:08.379455   58942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:59:08.389988   58942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:59:08.536994   58942 ssh_runner.go:195] Run: sudo systemctl restart crio
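The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, sysctls) before crio is restarted. A rough Go equivalent of the first of those edits, done with a regular expression instead of sed; setPauseImage is an illustrative helper, not minikube's code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image setting in a crio drop-in, mirroring:
//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' <file>
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("no pause_image line found in %s", path)
	}
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.9"); err != nil {
		fmt.Println(err)
	}
}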
	I0729 01:59:07.322135   59039 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 01:59:07.322351   59039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:07.322407   59039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:07.338207   59039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0729 01:59:07.338630   59039 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:07.339181   59039 main.go:141] libmachine: Using API Version  1
	I0729 01:59:07.339200   59039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:07.339614   59039 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:07.339813   59039 main.go:141] libmachine: (auto-464146) Calling .GetMachineName
	I0729 01:59:07.339965   59039 main.go:141] libmachine: (auto-464146) Calling .DriverName
	I0729 01:59:07.340115   59039 start.go:159] libmachine.API.Create for "auto-464146" (driver="kvm2")
	I0729 01:59:07.340155   59039 client.go:168] LocalClient.Create starting
	I0729 01:59:07.340204   59039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:59:07.340252   59039 main.go:141] libmachine: Decoding PEM data...
	I0729 01:59:07.340276   59039 main.go:141] libmachine: Parsing certificate...
	I0729 01:59:07.340345   59039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:59:07.340373   59039 main.go:141] libmachine: Decoding PEM data...
	I0729 01:59:07.340395   59039 main.go:141] libmachine: Parsing certificate...
	I0729 01:59:07.340423   59039 main.go:141] libmachine: Running pre-create checks...
	I0729 01:59:07.340443   59039 main.go:141] libmachine: (auto-464146) Calling .PreCreateCheck
	I0729 01:59:07.340811   59039 main.go:141] libmachine: (auto-464146) Calling .GetConfigRaw
	I0729 01:59:07.341282   59039 main.go:141] libmachine: Creating machine...
	I0729 01:59:07.341300   59039 main.go:141] libmachine: (auto-464146) Calling .Create
	I0729 01:59:07.341450   59039 main.go:141] libmachine: (auto-464146) Creating KVM machine...
	I0729 01:59:07.342674   59039 main.go:141] libmachine: (auto-464146) DBG | found existing default KVM network
	I0729 01:59:07.343988   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.343807   59157 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:61:72:15} reservation:<nil>}
	I0729 01:59:07.345074   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.344992   59157 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f50}
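network.go above skips 192.168.39.0/24 because it is already in use and settles on 192.168.50.0/24 as the first free private subnet. A small sketch of the overlap test behind that choice using net.ParseCIDR; the candidate list is made up for illustration.

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share addresses: it is enough to
// check whether either network contains the other's base IP.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, taken, _ := net.ParseCIDR("192.168.39.0/24") // subnet already used by mk-pause-112077
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	for _, c := range candidates {
		_, cand, _ := net.ParseCIDR(c)
		if overlaps(cand, taken) {
			fmt.Println("skipping subnet that is taken:", c)
			continue
		}
		fmt.Println("using free private subnet:", c)
		break
	}
}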
	I0729 01:59:07.345144   59039 main.go:141] libmachine: (auto-464146) DBG | created network xml: 
	I0729 01:59:07.345162   59039 main.go:141] libmachine: (auto-464146) DBG | <network>
	I0729 01:59:07.345169   59039 main.go:141] libmachine: (auto-464146) DBG |   <name>mk-auto-464146</name>
	I0729 01:59:07.345180   59039 main.go:141] libmachine: (auto-464146) DBG |   <dns enable='no'/>
	I0729 01:59:07.345191   59039 main.go:141] libmachine: (auto-464146) DBG |   
	I0729 01:59:07.345201   59039 main.go:141] libmachine: (auto-464146) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 01:59:07.345214   59039 main.go:141] libmachine: (auto-464146) DBG |     <dhcp>
	I0729 01:59:07.345228   59039 main.go:141] libmachine: (auto-464146) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 01:59:07.345251   59039 main.go:141] libmachine: (auto-464146) DBG |     </dhcp>
	I0729 01:59:07.345265   59039 main.go:141] libmachine: (auto-464146) DBG |   </ip>
	I0729 01:59:07.345274   59039 main.go:141] libmachine: (auto-464146) DBG |   
	I0729 01:59:07.345285   59039 main.go:141] libmachine: (auto-464146) DBG | </network>
	I0729 01:59:07.345297   59039 main.go:141] libmachine: (auto-464146) DBG | 
	I0729 01:59:07.351136   59039 main.go:141] libmachine: (auto-464146) DBG | trying to create private KVM network mk-auto-464146 192.168.50.0/24...
	I0729 01:59:07.422943   59039 main.go:141] libmachine: (auto-464146) DBG | private KVM network mk-auto-464146 192.168.50.0/24 created
	I0729 01:59:07.422978   59039 main.go:141] libmachine: (auto-464146) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146 ...
	I0729 01:59:07.422990   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.422907   59157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:07.423002   59039 main.go:141] libmachine: (auto-464146) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:59:07.423085   59039 main.go:141] libmachine: (auto-464146) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:59:07.668235   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.668115   59157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/id_rsa...
	I0729 01:59:07.798049   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.797891   59157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/auto-464146.rawdisk...
	I0729 01:59:07.798080   59039 main.go:141] libmachine: (auto-464146) DBG | Writing magic tar header
	I0729 01:59:07.798094   59039 main.go:141] libmachine: (auto-464146) DBG | Writing SSH key tar header
	I0729 01:59:07.798106   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.798002   59157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146 ...
	I0729 01:59:07.798120   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146
	I0729 01:59:07.798149   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:59:07.798160   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:07.798180   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146 (perms=drwx------)
	I0729 01:59:07.798194   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:59:07.798206   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:59:07.798220   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:59:07.798233   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home
	I0729 01:59:07.798240   59039 main.go:141] libmachine: (auto-464146) DBG | Skipping /home - not owner
	I0729 01:59:07.798285   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:59:07.798316   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:59:07.798328   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:59:07.798342   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:59:07.798352   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:59:07.798365   59039 main.go:141] libmachine: (auto-464146) Creating domain...
	I0729 01:59:07.799596   59039 main.go:141] libmachine: (auto-464146) define libvirt domain using xml: 
	I0729 01:59:07.799622   59039 main.go:141] libmachine: (auto-464146) <domain type='kvm'>
	I0729 01:59:07.799643   59039 main.go:141] libmachine: (auto-464146)   <name>auto-464146</name>
	I0729 01:59:07.799657   59039 main.go:141] libmachine: (auto-464146)   <memory unit='MiB'>3072</memory>
	I0729 01:59:07.799683   59039 main.go:141] libmachine: (auto-464146)   <vcpu>2</vcpu>
	I0729 01:59:07.799700   59039 main.go:141] libmachine: (auto-464146)   <features>
	I0729 01:59:07.799706   59039 main.go:141] libmachine: (auto-464146)     <acpi/>
	I0729 01:59:07.799714   59039 main.go:141] libmachine: (auto-464146)     <apic/>
	I0729 01:59:07.799747   59039 main.go:141] libmachine: (auto-464146)     <pae/>
	I0729 01:59:07.799767   59039 main.go:141] libmachine: (auto-464146)     
	I0729 01:59:07.799778   59039 main.go:141] libmachine: (auto-464146)   </features>
	I0729 01:59:07.799798   59039 main.go:141] libmachine: (auto-464146)   <cpu mode='host-passthrough'>
	I0729 01:59:07.799806   59039 main.go:141] libmachine: (auto-464146)   
	I0729 01:59:07.799814   59039 main.go:141] libmachine: (auto-464146)   </cpu>
	I0729 01:59:07.799821   59039 main.go:141] libmachine: (auto-464146)   <os>
	I0729 01:59:07.799827   59039 main.go:141] libmachine: (auto-464146)     <type>hvm</type>
	I0729 01:59:07.799834   59039 main.go:141] libmachine: (auto-464146)     <boot dev='cdrom'/>
	I0729 01:59:07.799848   59039 main.go:141] libmachine: (auto-464146)     <boot dev='hd'/>
	I0729 01:59:07.799860   59039 main.go:141] libmachine: (auto-464146)     <bootmenu enable='no'/>
	I0729 01:59:07.799869   59039 main.go:141] libmachine: (auto-464146)   </os>
	I0729 01:59:07.799880   59039 main.go:141] libmachine: (auto-464146)   <devices>
	I0729 01:59:07.799890   59039 main.go:141] libmachine: (auto-464146)     <disk type='file' device='cdrom'>
	I0729 01:59:07.799903   59039 main.go:141] libmachine: (auto-464146)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/boot2docker.iso'/>
	I0729 01:59:07.799917   59039 main.go:141] libmachine: (auto-464146)       <target dev='hdc' bus='scsi'/>
	I0729 01:59:07.799929   59039 main.go:141] libmachine: (auto-464146)       <readonly/>
	I0729 01:59:07.799951   59039 main.go:141] libmachine: (auto-464146)     </disk>
	I0729 01:59:07.799965   59039 main.go:141] libmachine: (auto-464146)     <disk type='file' device='disk'>
	I0729 01:59:07.799977   59039 main.go:141] libmachine: (auto-464146)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:59:07.799992   59039 main.go:141] libmachine: (auto-464146)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/auto-464146.rawdisk'/>
	I0729 01:59:07.800006   59039 main.go:141] libmachine: (auto-464146)       <target dev='hda' bus='virtio'/>
	I0729 01:59:07.800017   59039 main.go:141] libmachine: (auto-464146)     </disk>
	I0729 01:59:07.800025   59039 main.go:141] libmachine: (auto-464146)     <interface type='network'>
	I0729 01:59:07.800037   59039 main.go:141] libmachine: (auto-464146)       <source network='mk-auto-464146'/>
	I0729 01:59:07.800047   59039 main.go:141] libmachine: (auto-464146)       <model type='virtio'/>
	I0729 01:59:07.800054   59039 main.go:141] libmachine: (auto-464146)     </interface>
	I0729 01:59:07.800065   59039 main.go:141] libmachine: (auto-464146)     <interface type='network'>
	I0729 01:59:07.800082   59039 main.go:141] libmachine: (auto-464146)       <source network='default'/>
	I0729 01:59:07.800097   59039 main.go:141] libmachine: (auto-464146)       <model type='virtio'/>
	I0729 01:59:07.800107   59039 main.go:141] libmachine: (auto-464146)     </interface>
	I0729 01:59:07.800113   59039 main.go:141] libmachine: (auto-464146)     <serial type='pty'>
	I0729 01:59:07.800121   59039 main.go:141] libmachine: (auto-464146)       <target port='0'/>
	I0729 01:59:07.800131   59039 main.go:141] libmachine: (auto-464146)     </serial>
	I0729 01:59:07.800139   59039 main.go:141] libmachine: (auto-464146)     <console type='pty'>
	I0729 01:59:07.800150   59039 main.go:141] libmachine: (auto-464146)       <target type='serial' port='0'/>
	I0729 01:59:07.800163   59039 main.go:141] libmachine: (auto-464146)     </console>
	I0729 01:59:07.800173   59039 main.go:141] libmachine: (auto-464146)     <rng model='virtio'>
	I0729 01:59:07.800205   59039 main.go:141] libmachine: (auto-464146)       <backend model='random'>/dev/random</backend>
	I0729 01:59:07.800228   59039 main.go:141] libmachine: (auto-464146)     </rng>
	I0729 01:59:07.800254   59039 main.go:141] libmachine: (auto-464146)     
	I0729 01:59:07.800265   59039 main.go:141] libmachine: (auto-464146)     
	I0729 01:59:07.800283   59039 main.go:141] libmachine: (auto-464146)   </devices>
	I0729 01:59:07.800298   59039 main.go:141] libmachine: (auto-464146) </domain>
	I0729 01:59:07.800312   59039 main.go:141] libmachine: (auto-464146) 
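The block above is the libvirt domain XML the kvm2 driver defines for the new VM: the boot ISO attached as a CD-ROM, the raw disk, two virtio NICs, a serial console and a virtio RNG. One way to produce such a definition is to render a template, sketched below with text/template; the struct, the trimmed XML and the shortened paths are assumptions, not the driver's real template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down domain definition; the real one carries more devices.
type domainSpec struct {
	Name     string
	MemoryMB int
	CPUs     int
	DiskPath string
	ISOPath  string
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	spec := domainSpec{
		Name:     "auto-464146",
		MemoryMB: 3072,
		CPUs:     2,
		DiskPath: "/tmp/auto-464146.rawdisk", // shortened illustrative paths
		ISOPath:  "/tmp/boot2docker.iso",
		Network:  "mk-auto-464146",
	}
	// Prints XML in the same shape as the log block above.
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}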
	I0729 01:59:07.804552   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:9e:15:00 in network default
	I0729 01:59:07.805125   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:07.805165   59039 main.go:141] libmachine: (auto-464146) Ensuring networks are active...
	I0729 01:59:07.805812   59039 main.go:141] libmachine: (auto-464146) Ensuring network default is active
	I0729 01:59:07.806144   59039 main.go:141] libmachine: (auto-464146) Ensuring network mk-auto-464146 is active
	I0729 01:59:07.806653   59039 main.go:141] libmachine: (auto-464146) Getting domain xml...
	I0729 01:59:07.807397   59039 main.go:141] libmachine: (auto-464146) Creating domain...
	I0729 01:59:09.018191   59039 main.go:141] libmachine: (auto-464146) Waiting to get IP...
	I0729 01:59:09.018960   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.019417   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.019461   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.019416   59157 retry.go:31] will retry after 194.8096ms: waiting for machine to come up
	I0729 01:59:09.215873   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.216366   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.216396   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.216322   59157 retry.go:31] will retry after 242.431083ms: waiting for machine to come up
	I0729 01:59:09.461023   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.461558   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.461590   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.461473   59157 retry.go:31] will retry after 416.34467ms: waiting for machine to come up
	I0729 01:59:09.879015   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.879611   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.879636   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.879568   59157 retry.go:31] will retry after 555.162173ms: waiting for machine to come up
	I0729 01:59:10.436035   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:10.436519   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:10.436549   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:10.436465   59157 retry.go:31] will retry after 499.35339ms: waiting for machine to come up
	I0729 01:59:10.937167   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:10.937668   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:10.937690   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:10.937631   59157 retry.go:31] will retry after 802.525274ms: waiting for machine to come up
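The retry.go lines above poll for the new VM's DHCP lease with a delay that grows between attempts (194ms, 242ms, 416ms, ...). A compact sketch of that retry-with-backoff pattern; retryWithBackoff, the doubling factor and the jitter are placeholders rather than minikube's exact policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the attempt budget is spent,
// sleeping a growing, slightly randomized amount between attempts.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2 // grow the wait between attempts
	}
	return errors.New("machine did not come up in time")
}

func main() {
	calls := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address") // placeholder failure
		}
		return nil
	})
	fmt.Println("attempts:", calls, "err:", err)
}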
	I0729 01:59:14.119322   58942 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.582282318s)
	I0729 01:59:14.119361   58942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:59:14.119412   58942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:59:14.124545   58942 start.go:563] Will wait 60s for crictl version
	I0729 01:59:14.124605   58942 ssh_runner.go:195] Run: which crictl
	I0729 01:59:14.128484   58942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:59:14.168118   58942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:59:14.168203   58942 ssh_runner.go:195] Run: crio --version
	I0729 01:59:14.200449   58942 ssh_runner.go:195] Run: crio --version
	I0729 01:59:14.232801   58942 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:59:14.234159   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:14.237240   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:14.237641   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:14.237665   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:14.237893   58942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:59:14.242341   58942 kubeadm.go:883] updating cluster {Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:59:14.242483   58942 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:59:14.242531   58942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:59:14.287217   58942 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:59:14.287245   58942 crio.go:433] Images already preloaded, skipping extraction
	I0729 01:59:14.287300   58942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:59:14.323695   58942 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:59:14.323718   58942 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:59:14.323728   58942 kubeadm.go:934] updating node { 192.168.39.22 8443 v1.30.3 crio true true} ...
	I0729 01:59:14.323855   58942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-112077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:59:14.323942   58942 ssh_runner.go:195] Run: crio config
	I0729 01:59:14.374595   58942 cni.go:84] Creating CNI manager for ""
	I0729 01:59:14.374621   58942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:59:14.374632   58942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:59:14.374651   58942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-112077 NodeName:pause-112077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:59:14.374797   58942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-112077"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 01:59:14.374857   58942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:59:14.386277   58942 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:59:14.386344   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:59:14.396825   58942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 01:59:14.414514   58942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:59:14.432141   58942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
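The kubeadm.yaml.new copied here is the config block printed a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A heavily trimmed sketch of generating such a file from a template follows; the struct, field set and template text are assumptions for illustration only, not minikube's generator.

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	NodeName          string
	NodeIP            string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

// A heavily trimmed version of the config printed in the log above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := tmpl.Execute(os.Stdout, kubeadmParams{
		NodeName:          "pause-112077",
		NodeIP:            "192.168.39.22",
		BindPort:          8443,
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}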
	I0729 01:59:14.450392   58942 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0729 01:59:14.454825   58942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:59:14.594288   58942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:59:14.610451   58942 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077 for IP: 192.168.39.22
	I0729 01:59:14.610477   58942 certs.go:194] generating shared ca certs ...
	I0729 01:59:14.610499   58942 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:59:14.610669   58942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:59:14.610731   58942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:59:14.610744   58942 certs.go:256] generating profile certs ...
	I0729 01:59:14.610857   58942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/client.key
	I0729 01:59:14.610946   58942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.key.f5507500
	I0729 01:59:14.610981   58942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.key
	I0729 01:59:14.611118   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:59:14.611163   58942 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:59:14.611175   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:59:14.611200   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:59:14.611221   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:59:14.611240   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:59:14.611283   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:59:14.612635   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:59:14.639933   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:59:14.671306   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:59:14.696529   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:59:14.723633   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 01:59:14.750066   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:59:14.775261   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:59:14.800245   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:59:14.824108   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:59:14.848134   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:59:14.873158   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:59:14.935107   58942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:59:15.026387   58942 ssh_runner.go:195] Run: openssl version
	I0729 01:59:15.130264   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:59:15.195680   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.224631   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.224712   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.281540   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:59:11.741524   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:11.742003   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:11.742033   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:11.741954   59157 retry.go:31] will retry after 1.01251303s: waiting for machine to come up
	I0729 01:59:12.756519   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:12.757011   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:12.757062   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:12.756950   59157 retry.go:31] will retry after 1.161433115s: waiting for machine to come up
	I0729 01:59:13.920033   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:13.920500   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:13.920530   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:13.920455   59157 retry.go:31] will retry after 1.356984409s: waiting for machine to come up
	I0729 01:59:15.278624   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:15.279068   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:15.279096   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:15.278999   59157 retry.go:31] will retry after 1.811064228s: waiting for machine to come up
	I0729 01:59:19.224073   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:19.224312   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:15.324983   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:59:15.388784   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.397220   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.397287   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.438073   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:59:15.472575   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:59:15.515620   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.542374   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.542440   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.565637   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:59:15.599468   58942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:59:15.616932   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:59:15.641159   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:59:15.654715   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:59:15.662431   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:59:15.679608   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:59:15.716800   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 01:59:15.752446   58942 kubeadm.go:392] StartCluster: {Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:59:15.752546   58942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:59:15.752616   58942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:59:15.841118   58942 cri.go:89] found id: "9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0"
	I0729 01:59:15.841142   58942 cri.go:89] found id: "85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c"
	I0729 01:59:15.841149   58942 cri.go:89] found id: "8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35"
	I0729 01:59:15.841155   58942 cri.go:89] found id: "4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209"
	I0729 01:59:15.841160   58942 cri.go:89] found id: "ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3"
	I0729 01:59:15.841165   58942 cri.go:89] found id: "e4d9f94c7295602523ac69bb831cc319589d7b7ffb759c0822d74a5f4dd4f111"
	I0729 01:59:15.841169   58942 cri.go:89] found id: "6b220db8847222b6aa66fb6db253b1090864c6f6b39d4af7370baedd227ac46f"
	I0729 01:59:15.841174   58942 cri.go:89] found id: "d6ad9b45cb0b70ae5153758bc6999651e9c4e36cd7a8952d9b5164cab11b0d8e"
	I0729 01:59:15.841179   58942 cri.go:89] found id: "0012d6373d33da8958ee44eb8a5a736bfdebdca4b4f8b302fb57d5c64fb0397e"
	I0729 01:59:15.841188   58942 cri.go:89] found id: "f34eddb15984881aacb430691d7f4d407a4e08e554b8df639f4ecdde23f8c561"
	I0729 01:59:15.841192   58942 cri.go:89] found id: ""
	I0729 01:59:15.841248   58942 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.801769847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218402801746263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39e7f07e-6886-4e33-b485-19517effa913 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.802323322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dce20d77-49e0-4c1b-b710-6dd0ffdfd22b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.802377034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dce20d77-49e0-4c1b-b710-6dd0ffdfd22b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.802859031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dce20d77-49e0-4c1b-b710-6dd0ffdfd22b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.854516871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f83f818d-541d-47aa-8133-4af8b413da89 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.854618732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f83f818d-541d-47aa-8133-4af8b413da89 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.856510935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f117b2a-fb5f-4765-9337-a6c413444d3e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.857392446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218402857361963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f117b2a-fb5f-4765-9337-a6c413444d3e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.858358733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a69e117d-1e37-4984-94d8-d6378a3d49a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.858420200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a69e117d-1e37-4984-94d8-d6378a3d49a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.858730386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a69e117d-1e37-4984-94d8-d6378a3d49a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.907108682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7abcd3a5-5c7b-46c9-949a-7347d02ffeb9 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.907212844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7abcd3a5-5c7b-46c9-949a-7347d02ffeb9 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.908573166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab9feecc-4785-4f8a-9344-b565e7c3f4fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.909168848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218402909135098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab9feecc-4785-4f8a-9344-b565e7c3f4fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.909684131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d430ea9-aae4-43f9-b952-3de7008c7b3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.909751755Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d430ea9-aae4-43f9-b952-3de7008c7b3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.910346609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d430ea9-aae4-43f9-b952-3de7008c7b3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.958805308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c34bab23-4dc5-4acc-b180-684a09c26e1a name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.958964072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c34bab23-4dc5-4acc-b180-684a09c26e1a name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.960061332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6442f5a8-0ea7-4319-a47b-f01008c6b495 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.960444935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218402960423084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6442f5a8-0ea7-4319-a47b-f01008c6b495 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.961077150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b7b4039-ceca-4352-a62f-68a509910ceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.961143195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b7b4039-ceca-4352-a62f-68a509910ceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:02 pause-112077 crio[2455]: time="2024-07-29 02:00:02.961455331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b7b4039-ceca-4352-a62f-68a509910ceb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7cb97813bee57       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago       Running             kube-apiserver            2                   7155380eee197       kube-apiserver-pause-112077
	7ee3b6f7935d7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago       Running             kube-scheduler            2                   9625bb8069e78       kube-scheduler-pause-112077
	e2a7c3c5f8ebd       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago       Running             kube-controller-manager   2                   f6fd9e5342ad9       kube-controller-manager-pause-112077
	27b681f3330be       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago       Running             etcd                      2                   b7c26063b3953       etcd-pause-112077
	4b6577660f08c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   31 seconds ago       Running             kube-proxy                2                   44ffcdb427f6e       kube-proxy-m6zq2
	e3a65d7355efd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago       Running             coredns                   1                   8eb5c1b53ab26       coredns-7db6d8ff4d-2krfb
	74b922622ee31       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   47 seconds ago       Exited              kube-proxy                1                   44ffcdb427f6e       kube-proxy-m6zq2
	9d73da1cfbd34       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   47 seconds ago       Exited              etcd                      1                   b7c26063b3953       etcd-pause-112077
	85a42262e0858       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   47 seconds ago       Exited              kube-scheduler            1                   9625bb8069e78       kube-scheduler-pause-112077
	8ae9f347b1aff       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   47 seconds ago       Exited              kube-controller-manager   1                   f6fd9e5342ad9       kube-controller-manager-pause-112077
	4385bc3017a2e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   47 seconds ago       Exited              kube-apiserver            1                   7155380eee197       kube-apiserver-pause-112077
	ccfb93cf0ded9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d9b1baebff90b       coredns-7db6d8ff4d-2krfb
	
	
	==> coredns [ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1200011102]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:58:20.792) (total time: 30003ms):
	Trace[1200011102]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:58:50.796)
	Trace[1200011102]: [30.003945038s] [30.003945038s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1289154887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:58:20.792) (total time: 30004ms):
	Trace[1289154887]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:58:50.795)
	Trace[1289154887]: [30.004481672s] [30.004481672s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1566846437]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:58:20.794) (total time: 30002ms):
	Trace[1566846437]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:58:50.796)
	Trace[1566846437]: [30.002604991s] [30.002604991s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42162 - 39328 "HINFO IN 3105883890315905101.6052643386566175906. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013614543s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035] <==
	Trace[1917326646]: [10.001535742s] [10.001535742s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2086429211]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:16.615) (total time: 10001ms):
	Trace[2086429211]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:59:26.616)
	Trace[2086429211]: [10.001705831s] [10.001705831s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1173472798]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:16.619) (total time: 10007ms):
	Trace[1173472798]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (01:59:26.626)
	Trace[1173472798]: [10.007268815s] [10.007268815s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59600->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[974364603]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:27.573) (total time: 10164ms):
	Trace[974364603]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59600->10.96.0.1:443: read: connection reset by peer 10164ms (01:59:37.737)
	Trace[974364603]: [10.164490562s] [10.164490562s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59600->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59616->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59614->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1032750168]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:27.693) (total time: 10044ms):
	Trace[1032750168]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59614->10.96.0.1:443: read: connection reset by peer 10044ms (01:59:37.738)
	Trace[1032750168]: [10.044967684s] [10.044967684s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59614->10.96.0.1:443: read: connection reset by peer
	
	
	==> describe nodes <==
	Name:               pause-112077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-112077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=pause-112077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_58_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:58:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-112077
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 02:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    pause-112077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5480a6146014bd185ab01e674c9d5a1
	  System UUID:                f5480a61-4601-4bd1-85ab-01e674c9d5a1
	  Boot ID:                    6344c9eb-2544-4631-a978-91179b4d3a14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2krfb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     106s
	  kube-system                 etcd-pause-112077                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-112077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-112077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-m6zq2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-112077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m6s)  kubelet          Node pause-112077 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m6s)  kubelet          Node pause-112077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m6s)  kubelet          Node pause-112077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node pause-112077 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node pause-112077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node pause-112077 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeReady                119s                 kubelet          Node pause-112077 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node pause-112077 event: Registered Node pause-112077 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-112077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-112077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-112077 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                   node-controller  Node pause-112077 event: Registered Node pause-112077 in Controller
	
	
	==> dmesg <==
	[  +0.062640] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074406] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.165903] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.135502] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.309431] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.514993] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.070540] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.137350] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.957399] kauditd_printk_skb: 57 callbacks suppressed
	[Jul29 01:58] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.085426] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.004586] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.343691] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[ +12.985996] kauditd_printk_skb: 89 callbacks suppressed
	[Jul29 01:59] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.138890] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.167015] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.133439] systemd-fstab-generator[2412]: Ignoring "noauto" option for root device
	[  +0.281950] systemd-fstab-generator[2440]: Ignoring "noauto" option for root device
	[  +6.062575] systemd-fstab-generator[2566]: Ignoring "noauto" option for root device
	[  +0.076163] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.548544] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.961664] systemd-fstab-generator[3392]: Ignoring "noauto" option for root device
	[  +4.541296] kauditd_printk_skb: 38 callbacks suppressed
	[ +15.794043] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	
	
	==> etcd [27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63] <==
	{"level":"warn","ts":"2024-07-29T01:59:42.764887Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.461569Z","time spent":"303.316761ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-07-29T01:59:42.764315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.820431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T01:59:42.765441Z","caller":"traceutil/trace.go:171","msg":"trace[2010001286] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:441; }","duration":"303.944476ms","start":"2024-07-29T01:59:42.461486Z","end":"2024-07-29T01:59:42.76543Z","steps":["trace[2010001286] 'range keys from in-memory index tree'  (duration: 302.744772ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:42.765479Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.461466Z","time spent":"303.992282ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":29,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
	{"level":"warn","ts":"2024-07-29T01:59:42.765004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.636924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-07-29T01:59:42.765682Z","caller":"traceutil/trace.go:171","msg":"trace[539742882] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:441; }","duration":"191.318085ms","start":"2024-07-29T01:59:42.57434Z","end":"2024-07-29T01:59:42.765658Z","steps":["trace[539742882] 'agreement among raft nodes before linearized reading'  (duration: 190.561777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:42.765048Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.657255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-29T01:59:42.765814Z","caller":"traceutil/trace.go:171","msg":"trace[2144773494] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:441; }","duration":"191.447861ms","start":"2024-07-29T01:59:42.574358Z","end":"2024-07-29T01:59:42.765806Z","steps":["trace[2144773494] 'agreement among raft nodes before linearized reading'  (duration: 190.671088ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.114107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.248747ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16526399720541034633 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" value_size:462 lease:7303027683686258821 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T01:59:43.114419Z","caller":"traceutil/trace.go:171","msg":"trace[182135047] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"315.495584ms","start":"2024-07-29T01:59:42.798912Z","end":"2024-07-29T01:59:43.114407Z","steps":["trace[182135047] 'process raft request'  (duration: 315.455052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.114595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.253675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-2krfb\" ","response":"range_response_count:1 size:4729"}
	{"level":"info","ts":"2024-07-29T01:59:43.114691Z","caller":"traceutil/trace.go:171","msg":"trace[201774229] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-2krfb; range_end:; response_count:1; response_revision:444; }","duration":"344.402029ms","start":"2024-07-29T01:59:42.770266Z","end":"2024-07-29T01:59:43.114668Z","steps":["trace[201774229] 'agreement among raft nodes before linearized reading'  (duration: 344.188208ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.114742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.770255Z","time spent":"344.477939ms","remote":"127.0.0.1:56696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4753,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-2krfb\" "}
	{"level":"warn","ts":"2024-07-29T01:59:43.11497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.798893Z","time spent":"315.595579ms","remote":"127.0.0.1:56696","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4546,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-112077\" mod_revision:303 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-112077\" value_size:4484 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-112077\" > >"}
	{"level":"info","ts":"2024-07-29T01:59:43.115004Z","caller":"traceutil/trace.go:171","msg":"trace[226788179] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"344.979719ms","start":"2024-07-29T01:59:42.770012Z","end":"2024-07-29T01:59:43.114992Z","steps":["trace[226788179] 'process raft request'  (duration: 125.425991ms)","trace[226788179] 'compare'  (duration: 218.133639ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:59:43.115117Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.769997Z","time spent":"345.091642ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":534,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" value_size:462 lease:7303027683686258821 >> failure:<>"}
	{"level":"info","ts":"2024-07-29T01:59:43.114424Z","caller":"traceutil/trace.go:171","msg":"trace[1471502505] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:466; }","duration":"344.060015ms","start":"2024-07-29T01:59:42.770334Z","end":"2024-07-29T01:59:43.114394Z","steps":["trace[1471502505] 'read index received'  (duration: 125.113345ms)","trace[1471502505] 'applied index is now lower than readState.Index'  (duration: 218.944786ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T01:59:43.115363Z","caller":"traceutil/trace.go:171","msg":"trace[1823201464] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"326.984702ms","start":"2024-07-29T01:59:42.78837Z","end":"2024-07-29T01:59:43.115354Z","steps":["trace[1823201464] 'process raft request'  (duration: 325.91442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.11544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.78835Z","time spent":"327.060808ms","remote":"127.0.0.1:56688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5412,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/pause-112077\" mod_revision:415 > success:<request_put:<key:\"/registry/minions/pause-112077\" value_size:5374 >> failure:<request_range:<key:\"/registry/minions/pause-112077\" > >"}
	{"level":"warn","ts":"2024-07-29T01:59:43.115674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.069097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-m6zq2\" ","response":"range_response_count:1 size:4590"}
	{"level":"info","ts":"2024-07-29T01:59:43.115719Z","caller":"traceutil/trace.go:171","msg":"trace[1646549402] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-m6zq2; range_end:; response_count:1; response_revision:444; }","duration":"345.131818ms","start":"2024-07-29T01:59:42.77058Z","end":"2024-07-29T01:59:43.115711Z","steps":["trace[1646549402] 'agreement among raft nodes before linearized reading'  (duration: 345.027062ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.115742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.770565Z","time spent":"345.171117ms","remote":"127.0.0.1:56696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4614,"request content":"key:\"/registry/pods/kube-system/kube-proxy-m6zq2\" "}
	{"level":"info","ts":"2024-07-29T01:59:43.236776Z","caller":"traceutil/trace.go:171","msg":"trace[1756128147] linearizableReadLoop","detail":"{readStateIndex:470; appliedIndex:469; }","duration":"101.343222ms","start":"2024-07-29T01:59:43.135417Z","end":"2024-07-29T01:59:43.236761Z","steps":["trace[1756128147] 'read index received'  (duration: 94.395697ms)","trace[1756128147] 'applied index is now lower than readState.Index'  (duration: 6.947003ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:59:43.237325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.892387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" ","response":"range_response_count:53 size:37203"}
	{"level":"info","ts":"2024-07-29T01:59:43.23739Z","caller":"traceutil/trace.go:171","msg":"trace[192355523] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:53; response_revision:445; }","duration":"101.978648ms","start":"2024-07-29T01:59:43.135402Z","end":"2024-07-29T01:59:43.23738Z","steps":["trace[192355523] 'agreement among raft nodes before linearized reading'  (duration: 101.563556ms)"],"step_count":1}
	
	
	==> etcd [9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0] <==
	{"level":"info","ts":"2024-07-29T01:59:15.884565Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.877829ms"}
	{"level":"info","ts":"2024-07-29T01:59:15.928053Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T01:59:15.989268Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","commit-index":459}
	{"level":"info","ts":"2024-07-29T01:59:15.989605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T01:59:15.989749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T01:59:15.989762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft cde0bb267fc4e559 [peers: [], term: 2, commit: 459, applied: 0, lastindex: 459, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T01:59:15.993192Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T01:59:16.019599Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":438}
	{"level":"info","ts":"2024-07-29T01:59:16.029752Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T01:59:16.036437Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"cde0bb267fc4e559","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037088Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"cde0bb267fc4e559"}
	{"level":"info","ts":"2024-07-29T01:59:16.037126Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"cde0bb267fc4e559","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T01:59:16.037436Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T01:59:16.037596Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037635Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037645Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=(14835062946585175385)"}
	{"level":"info","ts":"2024-07-29T01:59:16.038435Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","added-peer-id":"cde0bb267fc4e559","added-peer-peer-urls":["https://192.168.39.22:2380"]}
	{"level":"info","ts":"2024-07-29T01:59:16.038553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:59:16.03858Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:59:16.045076Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T01:59:16.045172Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-07-29T01:59:16.045476Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-07-29T01:59:16.0474Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:59:16.047433Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 02:00:03 up 2 min,  0 users,  load average: 0.63, 0.35, 0.14
	Linux pause-112077 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209] <==
	I0729 01:59:16.153310       1 server.go:148] Version: v1.30.3
	I0729 01:59:16.153400       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 01:59:16.750119       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:16.750356       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 01:59:16.751552       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 01:59:16.758181       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:59:16.763122       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 01:59:16.763238       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 01:59:16.763436       1 instance.go:299] Using reconciler: lease
	W0729 01:59:16.765032       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:17.751170       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:17.751242       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:17.766595       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:19.492464       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:19.528874       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:19.549371       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:21.695452       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:22.378846       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:22.442362       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:26.081666       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:26.766074       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:27.323643       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:32.354141       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:32.494642       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:32.799131       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568] <==
	I0729 01:59:42.318115       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:59:42.320443       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:59:42.342706       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:59:42.346085       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 01:59:42.370114       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 01:59:42.766443       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 01:59:43.118088       1 trace.go:236] Trace[1720699490]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:29311928-7a74-415d-8c9b-d234238eb000,client:192.168.39.22,api-group:events.k8s.io,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:events,scope:resource,url:/apis/events.k8s.io/v1/namespaces/default/events,user-agent:kube-proxy/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 01:59:42.454) (total time: 663ms):
	Trace[1720699490]: ["Create etcd3" audit-id:29311928-7a74-415d-8c9b-d234238eb000,key:/events/default/pause-112077.17e68c7e7ef2a3e8,type:*core.Event,resource:events 662ms (01:59:42.455)
	Trace[1720699490]:  ---"TransformToStorage succeeded" 312ms (01:59:42.768)
	Trace[1720699490]:  ---"Txn call succeeded" 349ms (01:59:43.117)]
	Trace[1720699490]: [663.474629ms] [663.474629ms] END
	I0729 01:59:43.121011       1 trace.go:236] Trace[461850243]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:14b98d19-ea1e-42a0-9dc0-3adb7eb8939d,client:192.168.39.22,api-group:,api-version:v1,name:coredns,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 01:59:42.572) (total time: 548ms):
	Trace[461850243]: ---"watchCache locked acquired" 545ms (01:59:43.118)
	Trace[461850243]: [548.331413ms] [548.331413ms] END
	I0729 01:59:43.122732       1 trace.go:236] Trace[1148024247]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:efa571ca-87ca-49f7-9624-3c1be483e0de,client:192.168.39.22,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 01:59:42.572) (total time: 550ms):
	Trace[1148024247]: ---"watchCache locked acquired" 545ms (01:59:43.118)
	Trace[1148024247]: [550.283395ms] [550.283395ms] END
	I0729 01:59:43.134326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:59:44.128278       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:59:44.150407       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:59:44.193562       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:59:44.224802       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:59:44.234111       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:59:55.111563       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 01:59:55.260848       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35] <==
	I0729 01:59:16.952516       1 serving.go:380] Generated self-signed cert in-memory
	I0729 01:59:17.306274       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 01:59:17.306368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:59:17.308045       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 01:59:17.308575       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 01:59:17.308587       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 01:59:17.308609       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53] <==
	I0729 01:59:55.116625       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0729 01:59:55.122754       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 01:59:55.134381       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 01:59:55.137773       1 shared_informer.go:320] Caches are synced for taint
	I0729 01:59:55.137863       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 01:59:55.138073       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 01:59:55.138360       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-112077"
	I0729 01:59:55.138509       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 01:59:55.143051       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 01:59:55.145466       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 01:59:55.200191       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 01:59:55.209230       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 01:59:55.229085       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 01:59:55.231548       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 01:59:55.248741       1 shared_informer.go:320] Caches are synced for expand
	I0729 01:59:55.257329       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 01:59:55.307102       1 shared_informer.go:320] Caches are synced for disruption
	I0729 01:59:55.319484       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 01:59:55.320270       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 01:59:55.320460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.766µs"
	I0729 01:59:55.331779       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 01:59:55.356167       1 shared_informer.go:320] Caches are synced for deployment
	I0729 01:59:55.752764       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 01:59:55.752911       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 01:59:55.770026       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4] <==
	I0729 01:59:31.902071       1 server_linux.go:69] "Using iptables proxy"
	E0729 01:59:37.737805       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-112077\": dial tcp 192.168.39.22:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:47348->192.168.39.22:8443: read: connection reset by peer"
	E0729 01:59:38.777366       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-112077\": dial tcp 192.168.39.22:8443: connect: connection refused"
	I0729 01:59:42.297403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	I0729 01:59:42.425141       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:59:42.425349       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:59:42.425522       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:59:42.433745       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:59:42.434743       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:59:42.434823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:59:42.443211       1 config.go:192] "Starting service config controller"
	I0729 01:59:42.448045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:59:42.443670       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:59:42.448115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:59:42.444532       1 config.go:319] "Starting node config controller"
	I0729 01:59:42.448129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:59:42.548489       1 shared_informer.go:320] Caches are synced for node config
	I0729 01:59:42.548519       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:59:42.548559       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50] <==
	
	
	==> kube-scheduler [7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424] <==
	I0729 01:59:39.882831       1 serving.go:380] Generated self-signed cert in-memory
	W0729 01:59:42.229672       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 01:59:42.229781       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:59:42.229796       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 01:59:42.229806       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 01:59:42.357741       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 01:59:42.357791       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:59:42.368298       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 01:59:42.371056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 01:59:42.371205       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:59:42.371299       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 01:59:42.474111       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c] <==
	I0729 01:59:16.923241       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.597887    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0214a0633ffd83097e82bb4653d76e15-kubeconfig\") pod \"kube-controller-manager-pause-112077\" (UID: \"0214a0633ffd83097e82bb4653d76e15\") " pod="kube-system/kube-controller-manager-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.597905    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bee17b01eb59302a56a478e8f065fe54-kubeconfig\") pod \"kube-scheduler-pause-112077\" (UID: \"bee17b01eb59302a56a478e8f065fe54\") " pod="kube-system/kube-scheduler-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.597989    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d466c7c5637c35513d90103d11d837ec-etcd-certs\") pod \"etcd-pause-112077\" (UID: \"d466c7c5637c35513d90103d11d837ec\") " pod="kube-system/etcd-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.598011    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d466c7c5637c35513d90103d11d837ec-etcd-data\") pod \"etcd-pause-112077\" (UID: \"d466c7c5637c35513d90103d11d837ec\") " pod="kube-system/etcd-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.598030    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e233f10ca65887d6f7104393588a521b-k8s-certs\") pod \"kube-apiserver-pause-112077\" (UID: \"e233f10ca65887d6f7104393588a521b\") " pod="kube-system/kube-apiserver-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.686378    3399 kubelet_node_status.go:73] "Attempting to register node" node="pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: E0729 01:59:38.687635    3399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.22:8443: connect: connection refused" node="pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.821379    3399 scope.go:117] "RemoveContainer" containerID="8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.822647    3399 scope.go:117] "RemoveContainer" containerID="85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.824295    3399 scope.go:117] "RemoveContainer" containerID="9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.835816    3399 scope.go:117] "RemoveContainer" containerID="4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: E0729 01:59:38.989295    3399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-112077?timeout=10s\": dial tcp 192.168.39.22:8443: connect: connection refused" interval="800ms"
	Jul 29 01:59:39 pause-112077 kubelet[3399]: I0729 01:59:39.089244    3399 kubelet_node_status.go:73] "Attempting to register node" node="pause-112077"
	Jul 29 01:59:39 pause-112077 kubelet[3399]: E0729 01:59:39.090271    3399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.22:8443: connect: connection refused" node="pause-112077"
	Jul 29 01:59:39 pause-112077 kubelet[3399]: I0729 01:59:39.892029    3399 kubelet_node_status.go:73] "Attempting to register node" node="pause-112077"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.359309    3399 apiserver.go:52] "Watching apiserver"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.390441    3399 topology_manager.go:215] "Topology Admit Handler" podUID="709db69f-5c21-49dd-b30d-3395f0043e30" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2krfb"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.391749    3399 topology_manager.go:215] "Topology Admit Handler" podUID="7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c" podNamespace="kube-system" podName="kube-proxy-m6zq2"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.482434    3399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.569223    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c-xtables-lock\") pod \"kube-proxy-m6zq2\" (UID: \"7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c\") " pod="kube-system/kube-proxy-m6zq2"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.569307    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c-lib-modules\") pod \"kube-proxy-m6zq2\" (UID: \"7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c\") " pod="kube-system/kube-proxy-m6zq2"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.771518    3399 kubelet_node_status.go:112] "Node was previously registered" node="pause-112077"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.771675    3399 kubelet_node_status.go:76] "Successfully registered node" node="pause-112077"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.773669    3399 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.775403    3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 02:00:02.441512   59668 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-9421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
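Editor's note on the stderr above: "bufio.Scanner: token too long" is the Go standard library's bufio.ErrTooLong, returned when a single line in the scanned file (here lastStart.txt) exceeds the Scanner's buffer cap, which defaults to bufio.MaxScanTokenSize (64 KiB). The sketch below is a minimal, hypothetical reproduction of that failure mode and the usual workaround of raising the cap via Scanner.Buffer; the file name is a stand-in and this is not minikube's actual logs.go code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Stand-in for the log file the test harness failed to read.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// By default a Scanner rejects any token larger than
		// bufio.MaxScanTokenSize (64 KiB) and s.Err() reports
		// "bufio.Scanner: token too long". Supplying a larger cap
		// lets very long log lines (such as the wrapped cluster
		// config lines in these logs) scan successfully.
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for s.Scan() {
			fmt.Println(s.Text())
		}
		if err := s.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
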
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-112077 -n pause-112077
helpers_test.go:261: (dbg) Run:  kubectl --context pause-112077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-112077 -n pause-112077
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-112077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-112077 logs -n 25: (1.598000609s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:54 UTC | 29 Jul 24 01:55 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-713702             | running-upgrade-713702    | jenkins | v1.33.1 | 29 Jul 24 01:55 UTC | 29 Jul 24 01:57 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:55 UTC | 29 Jul 24 01:55 UTC |
	| start   | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:55 UTC | 29 Jul 24 01:56 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-137446 ssh cat     | force-systemd-flag-137446 | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:56 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-137446          | force-systemd-flag-137446 | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:56 UTC |
	| start   | -p cert-options-343391                | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:57 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-703567 sudo           | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:56 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-713702             | running-upgrade-713702    | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p pause-112077 --memory=2048         | pause-112077              | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:59 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-343391 ssh               | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-343391 -- sudo        | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-343391                | cert-options-343391       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p kubernetes-upgrade-211243          | kubernetes-upgrade-211243 | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-703567 sudo           | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-703567                | NoKubernetes-703567       | jenkins | v1.33.1 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:57 UTC |
	| start   | -p stopped-upgrade-804241             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 01:57 UTC | 29 Jul 24 01:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-923851             | cert-expiration-923851    | jenkins | v1.33.1 | 29 Jul 24 01:58 UTC | 29 Jul 24 01:59 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-923851             | cert-expiration-923851    | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC | 29 Jul 24 01:59 UTC |
	| start   | -p pause-112077                       | pause-112077              | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC | 29 Jul 24 02:00 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p auto-464146 --memory=3072          | auto-464146               | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-804241 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 01:59 UTC | 29 Jul 24 01:59 UTC |
	| start   | -p stopped-upgrade-804241             | stopped-upgrade-804241    | jenkins | v1.33.1 | 29 Jul 24 01:59 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 01:59:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 01:59:03.858701   59122 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:59:03.858966   59122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:59:03.858975   59122 out.go:304] Setting ErrFile to fd 2...
	I0729 01:59:03.858980   59122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:59:03.859201   59122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:59:03.859778   59122 out.go:298] Setting JSON to false
	I0729 01:59:03.860701   59122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6090,"bootTime":1722212254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:59:03.860767   59122 start.go:139] virtualization: kvm guest
	I0729 01:59:03.863135   59122 out.go:177] * [stopped-upgrade-804241] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:59:03.864626   59122 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:59:03.864641   59122 notify.go:220] Checking for updates...
	I0729 01:59:03.867163   59122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:59:03.868421   59122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:59:03.869654   59122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:03.870866   59122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:59:03.872038   59122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:59:03.873617   59122 config.go:182] Loaded profile config "stopped-upgrade-804241": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0729 01:59:03.873996   59122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:03.874072   59122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:03.888916   59122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44669
	I0729 01:59:03.889315   59122 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:03.889789   59122 main.go:141] libmachine: Using API Version  1
	I0729 01:59:03.889811   59122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:03.890167   59122 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:03.890331   59122 main.go:141] libmachine: (stopped-upgrade-804241) Calling .DriverName
	I0729 01:59:03.892303   59122 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 01:59:03.893605   59122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:59:03.893902   59122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:03.893942   59122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:03.908482   59122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0729 01:59:03.908842   59122 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:03.909464   59122 main.go:141] libmachine: Using API Version  1
	I0729 01:59:03.909495   59122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:03.909883   59122 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:03.910099   59122 main.go:141] libmachine: (stopped-upgrade-804241) Calling .DriverName
	I0729 01:59:03.944979   59122 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:59:03.946264   59122 start.go:297] selected driver: kvm2
	I0729 01:59:03.946279   59122 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-804241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-804
241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.165 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 01:59:03.946404   59122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:59:03.947253   59122 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:59:03.947322   59122 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 01:59:03.962696   59122 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 01:59:03.963052   59122 cni.go:84] Creating CNI manager for ""
	I0729 01:59:03.963091   59122 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:59:03.963158   59122 start.go:340] cluster config:
	{Name:stopped-upgrade-804241 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-804241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.165 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 01:59:03.963264   59122 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 01:59:03.965187   59122 out.go:177] * Starting "stopped-upgrade-804241" primary control-plane node in "stopped-upgrade-804241" cluster
	I0729 01:59:04.223211   57807 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 01:59:04.223845   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:04.224074   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:00.773251   58942 machine.go:94] provisionDockerMachine start ...
	I0729 01:59:00.773293   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:00.773594   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:00.776414   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.776849   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:00.776895   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.777070   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:00.777277   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.777453   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.777591   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:00.777734   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:00.777988   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:00.778003   58942 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 01:59:00.900982   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-112077
	
	I0729 01:59:00.901024   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:00.901297   58942 buildroot.go:166] provisioning hostname "pause-112077"
	I0729 01:59:00.901324   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:00.901512   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:00.904914   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.905365   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:00.905393   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:00.905621   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:00.905823   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.905993   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:00.906158   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:00.906313   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:00.906526   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:00.906545   58942 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-112077 && echo "pause-112077" | sudo tee /etc/hostname
	I0729 01:59:01.049250   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-112077
	
	I0729 01:59:01.049282   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.052828   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.053222   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.053265   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.053421   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.053628   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.054012   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.054213   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.054460   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:01.054703   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:01.054727   58942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-112077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-112077/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-112077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 01:59:01.172295   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 01:59:01.172333   58942 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-9421/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-9421/.minikube}
	I0729 01:59:01.172362   58942 buildroot.go:174] setting up certificates
	I0729 01:59:01.172376   58942 provision.go:84] configureAuth start
	I0729 01:59:01.172391   58942 main.go:141] libmachine: (pause-112077) Calling .GetMachineName
	I0729 01:59:01.172665   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:01.175394   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.175763   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.175800   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.175954   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.178094   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.178393   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.178426   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.178528   58942 provision.go:143] copyHostCerts
	I0729 01:59:01.178596   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem, removing ...
	I0729 01:59:01.178613   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem
	I0729 01:59:01.178679   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/cert.pem (1123 bytes)
	I0729 01:59:01.178782   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem, removing ...
	I0729 01:59:01.178794   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem
	I0729 01:59:01.178828   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/key.pem (1675 bytes)
	I0729 01:59:01.178894   58942 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem, removing ...
	I0729 01:59:01.178905   58942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem
	I0729 01:59:01.178932   58942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-9421/.minikube/ca.pem (1078 bytes)
	I0729 01:59:01.178991   58942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem org=jenkins.pause-112077 san=[127.0.0.1 192.168.39.22 localhost minikube pause-112077]
	I0729 01:59:01.320795   58942 provision.go:177] copyRemoteCerts
	I0729 01:59:01.320854   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 01:59:01.320876   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.324209   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.324635   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.324698   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.324884   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.325071   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.325233   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.325424   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:01.417411   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 01:59:01.451176   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 01:59:01.480705   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 01:59:01.509060   58942 provision.go:87] duration metric: took 336.668444ms to configureAuth
	I0729 01:59:01.509086   58942 buildroot.go:189] setting minikube options for container-runtime
	I0729 01:59:01.509468   58942 config.go:182] Loaded profile config "pause-112077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:59:01.509573   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:01.512733   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.513109   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:01.513138   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:01.513370   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:01.513602   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.513786   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:01.514002   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:01.514189   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:01.514407   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:01.514429   58942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 01:59:01.061692   59039 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:59:01.061756   59039 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 01:59:01.061777   59039 cache.go:56] Caching tarball of preloaded images
	I0729 01:59:01.061864   59039 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:59:01.061879   59039 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 01:59:01.061998   59039 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/auto-464146/config.json ...
	I0729 01:59:01.062026   59039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/auto-464146/config.json: {Name:mk0dee52ca89978662c54ea73f7ceed742d218d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:59:01.062195   59039 start.go:360] acquireMachinesLock for auto-464146: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:59:07.319955   59039 start.go:364] duration metric: took 6.257735754s to acquireMachinesLock for "auto-464146"
	I0729 01:59:07.320022   59039 start.go:93] Provisioning new machine with config: &{Name:auto-464146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-464146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 01:59:07.320203   59039 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 01:59:03.966334   59122 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0729 01:59:03.966388   59122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0729 01:59:03.966412   59122 cache.go:56] Caching tarball of preloaded images
	I0729 01:59:03.966518   59122 preload.go:172] Found /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 01:59:03.966532   59122 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0729 01:59:03.966655   59122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/stopped-upgrade-804241/config.json ...
	I0729 01:59:03.966929   59122 start.go:360] acquireMachinesLock for stopped-upgrade-804241: {Name:mk7869d18a6cc8cac10e2f8b84e70cbd6e51bf8d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 01:59:09.224611   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:09.224876   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:07.074488   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 01:59:07.074537   58942 machine.go:97] duration metric: took 6.301250035s to provisionDockerMachine
	I0729 01:59:07.074548   58942 start.go:293] postStartSetup for "pause-112077" (driver="kvm2")
	I0729 01:59:07.074558   58942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 01:59:07.074571   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.075012   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 01:59:07.075043   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.078131   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.078524   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.078562   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.078713   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.078898   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.079076   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.079216   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.166421   58942 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 01:59:07.171015   58942 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 01:59:07.171041   58942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/addons for local assets ...
	I0729 01:59:07.171114   58942 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-9421/.minikube/files for local assets ...
	I0729 01:59:07.171203   58942 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem -> 166232.pem in /etc/ssl/certs
	I0729 01:59:07.171301   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 01:59:07.180907   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:59:07.204797   58942 start.go:296] duration metric: took 130.235759ms for postStartSetup
	I0729 01:59:07.204851   58942 fix.go:56] duration metric: took 6.791691719s for fixHost
	I0729 01:59:07.204875   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.207558   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.207928   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.207955   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.208138   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.208322   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.208492   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.208600   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.208768   58942 main.go:141] libmachine: Using SSH client type: native
	I0729 01:59:07.208941   58942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0729 01:59:07.208950   58942 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 01:59:07.319802   58942 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722218347.301546106
	
	I0729 01:59:07.319822   58942 fix.go:216] guest clock: 1722218347.301546106
	I0729 01:59:07.319831   58942 fix.go:229] Guest: 2024-07-29 01:59:07.301546106 +0000 UTC Remote: 2024-07-29 01:59:07.204855132 +0000 UTC m=+6.986045348 (delta=96.690974ms)
	I0729 01:59:07.319870   58942 fix.go:200] guest clock delta is within tolerance: 96.690974ms
	I0729 01:59:07.319876   58942 start.go:83] releasing machines lock for "pause-112077", held for 6.90672832s
	I0729 01:59:07.319909   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.320195   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:07.323127   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.323540   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.323565   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.323731   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324310   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324481   58942 main.go:141] libmachine: (pause-112077) Calling .DriverName
	I0729 01:59:07.324537   58942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 01:59:07.324573   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.324713   58942 ssh_runner.go:195] Run: cat /version.json
	I0729 01:59:07.324743   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHHostname
	I0729 01:59:07.327319   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327520   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327710   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.327728   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327825   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:07.327850   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:07.327912   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.328077   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHPort
	I0729 01:59:07.328100   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.328213   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHKeyPath
	I0729 01:59:07.328406   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.328449   58942 main.go:141] libmachine: (pause-112077) Calling .GetSSHUsername
	I0729 01:59:07.328563   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.328618   58942 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/pause-112077/id_rsa Username:docker}
	I0729 01:59:07.428600   58942 ssh_runner.go:195] Run: systemctl --version
	I0729 01:59:07.435188   58942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 01:59:07.590651   58942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 01:59:07.596996   58942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 01:59:07.597060   58942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 01:59:07.606543   58942 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 01:59:07.606564   58942 start.go:495] detecting cgroup driver to use...
	I0729 01:59:07.606630   58942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 01:59:07.623623   58942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 01:59:07.638061   58942 docker.go:217] disabling cri-docker service (if available) ...
	I0729 01:59:07.638115   58942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 01:59:07.653649   58942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 01:59:07.668951   58942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 01:59:07.810006   58942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 01:59:07.949024   58942 docker.go:233] disabling docker service ...
	I0729 01:59:07.949101   58942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 01:59:07.967184   58942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 01:59:07.981535   58942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 01:59:08.110180   58942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 01:59:08.241015   58942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 01:59:08.256868   58942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 01:59:08.278103   58942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 01:59:08.278162   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.289116   58942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 01:59:08.289174   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.299982   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.311401   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.322358   58942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 01:59:08.333572   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.344638   58942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.356654   58942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 01:59:08.368734   58942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 01:59:08.379455   58942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 01:59:08.389988   58942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:59:08.536994   58942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 01:59:07.322135   59039 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 01:59:07.322351   59039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:59:07.322407   59039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:59:07.338207   59039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0729 01:59:07.338630   59039 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:59:07.339181   59039 main.go:141] libmachine: Using API Version  1
	I0729 01:59:07.339200   59039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:59:07.339614   59039 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:59:07.339813   59039 main.go:141] libmachine: (auto-464146) Calling .GetMachineName
	I0729 01:59:07.339965   59039 main.go:141] libmachine: (auto-464146) Calling .DriverName
	I0729 01:59:07.340115   59039 start.go:159] libmachine.API.Create for "auto-464146" (driver="kvm2")
	I0729 01:59:07.340155   59039 client.go:168] LocalClient.Create starting
	I0729 01:59:07.340204   59039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem
	I0729 01:59:07.340252   59039 main.go:141] libmachine: Decoding PEM data...
	I0729 01:59:07.340276   59039 main.go:141] libmachine: Parsing certificate...
	I0729 01:59:07.340345   59039 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem
	I0729 01:59:07.340373   59039 main.go:141] libmachine: Decoding PEM data...
	I0729 01:59:07.340395   59039 main.go:141] libmachine: Parsing certificate...
	I0729 01:59:07.340423   59039 main.go:141] libmachine: Running pre-create checks...
	I0729 01:59:07.340443   59039 main.go:141] libmachine: (auto-464146) Calling .PreCreateCheck
	I0729 01:59:07.340811   59039 main.go:141] libmachine: (auto-464146) Calling .GetConfigRaw
	I0729 01:59:07.341282   59039 main.go:141] libmachine: Creating machine...
	I0729 01:59:07.341300   59039 main.go:141] libmachine: (auto-464146) Calling .Create
	I0729 01:59:07.341450   59039 main.go:141] libmachine: (auto-464146) Creating KVM machine...
	I0729 01:59:07.342674   59039 main.go:141] libmachine: (auto-464146) DBG | found existing default KVM network
	I0729 01:59:07.343988   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.343807   59157 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:61:72:15} reservation:<nil>}
	I0729 01:59:07.345074   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.344992   59157 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f50}
	I0729 01:59:07.345144   59039 main.go:141] libmachine: (auto-464146) DBG | created network xml: 
	I0729 01:59:07.345162   59039 main.go:141] libmachine: (auto-464146) DBG | <network>
	I0729 01:59:07.345169   59039 main.go:141] libmachine: (auto-464146) DBG |   <name>mk-auto-464146</name>
	I0729 01:59:07.345180   59039 main.go:141] libmachine: (auto-464146) DBG |   <dns enable='no'/>
	I0729 01:59:07.345191   59039 main.go:141] libmachine: (auto-464146) DBG |   
	I0729 01:59:07.345201   59039 main.go:141] libmachine: (auto-464146) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 01:59:07.345214   59039 main.go:141] libmachine: (auto-464146) DBG |     <dhcp>
	I0729 01:59:07.345228   59039 main.go:141] libmachine: (auto-464146) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 01:59:07.345251   59039 main.go:141] libmachine: (auto-464146) DBG |     </dhcp>
	I0729 01:59:07.345265   59039 main.go:141] libmachine: (auto-464146) DBG |   </ip>
	I0729 01:59:07.345274   59039 main.go:141] libmachine: (auto-464146) DBG |   
	I0729 01:59:07.345285   59039 main.go:141] libmachine: (auto-464146) DBG | </network>
	I0729 01:59:07.345297   59039 main.go:141] libmachine: (auto-464146) DBG | 
	I0729 01:59:07.351136   59039 main.go:141] libmachine: (auto-464146) DBG | trying to create private KVM network mk-auto-464146 192.168.50.0/24...
	I0729 01:59:07.422943   59039 main.go:141] libmachine: (auto-464146) DBG | private KVM network mk-auto-464146 192.168.50.0/24 created
	I0729 01:59:07.422978   59039 main.go:141] libmachine: (auto-464146) Setting up store path in /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146 ...
	I0729 01:59:07.422990   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.422907   59157 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:07.423002   59039 main.go:141] libmachine: (auto-464146) Building disk image from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 01:59:07.423085   59039 main.go:141] libmachine: (auto-464146) Downloading /home/jenkins/minikube-integration/19312-9421/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 01:59:07.668235   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.668115   59157 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/id_rsa...
	I0729 01:59:07.798049   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.797891   59157 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/auto-464146.rawdisk...
	I0729 01:59:07.798080   59039 main.go:141] libmachine: (auto-464146) DBG | Writing magic tar header
	I0729 01:59:07.798094   59039 main.go:141] libmachine: (auto-464146) DBG | Writing SSH key tar header
	I0729 01:59:07.798106   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:07.798002   59157 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146 ...
	I0729 01:59:07.798120   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146
	I0729 01:59:07.798149   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube/machines
	I0729 01:59:07.798160   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:59:07.798180   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146 (perms=drwx------)
	I0729 01:59:07.798194   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-9421
	I0729 01:59:07.798206   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 01:59:07.798220   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home/jenkins
	I0729 01:59:07.798233   59039 main.go:141] libmachine: (auto-464146) DBG | Checking permissions on dir: /home
	I0729 01:59:07.798240   59039 main.go:141] libmachine: (auto-464146) DBG | Skipping /home - not owner
	I0729 01:59:07.798285   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube/machines (perms=drwxr-xr-x)
	I0729 01:59:07.798316   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421/.minikube (perms=drwxr-xr-x)
	I0729 01:59:07.798328   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration/19312-9421 (perms=drwxrwxr-x)
	I0729 01:59:07.798342   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 01:59:07.798352   59039 main.go:141] libmachine: (auto-464146) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 01:59:07.798365   59039 main.go:141] libmachine: (auto-464146) Creating domain...
	I0729 01:59:07.799596   59039 main.go:141] libmachine: (auto-464146) define libvirt domain using xml: 
	I0729 01:59:07.799622   59039 main.go:141] libmachine: (auto-464146) <domain type='kvm'>
	I0729 01:59:07.799643   59039 main.go:141] libmachine: (auto-464146)   <name>auto-464146</name>
	I0729 01:59:07.799657   59039 main.go:141] libmachine: (auto-464146)   <memory unit='MiB'>3072</memory>
	I0729 01:59:07.799683   59039 main.go:141] libmachine: (auto-464146)   <vcpu>2</vcpu>
	I0729 01:59:07.799700   59039 main.go:141] libmachine: (auto-464146)   <features>
	I0729 01:59:07.799706   59039 main.go:141] libmachine: (auto-464146)     <acpi/>
	I0729 01:59:07.799714   59039 main.go:141] libmachine: (auto-464146)     <apic/>
	I0729 01:59:07.799747   59039 main.go:141] libmachine: (auto-464146)     <pae/>
	I0729 01:59:07.799767   59039 main.go:141] libmachine: (auto-464146)     
	I0729 01:59:07.799778   59039 main.go:141] libmachine: (auto-464146)   </features>
	I0729 01:59:07.799798   59039 main.go:141] libmachine: (auto-464146)   <cpu mode='host-passthrough'>
	I0729 01:59:07.799806   59039 main.go:141] libmachine: (auto-464146)   
	I0729 01:59:07.799814   59039 main.go:141] libmachine: (auto-464146)   </cpu>
	I0729 01:59:07.799821   59039 main.go:141] libmachine: (auto-464146)   <os>
	I0729 01:59:07.799827   59039 main.go:141] libmachine: (auto-464146)     <type>hvm</type>
	I0729 01:59:07.799834   59039 main.go:141] libmachine: (auto-464146)     <boot dev='cdrom'/>
	I0729 01:59:07.799848   59039 main.go:141] libmachine: (auto-464146)     <boot dev='hd'/>
	I0729 01:59:07.799860   59039 main.go:141] libmachine: (auto-464146)     <bootmenu enable='no'/>
	I0729 01:59:07.799869   59039 main.go:141] libmachine: (auto-464146)   </os>
	I0729 01:59:07.799880   59039 main.go:141] libmachine: (auto-464146)   <devices>
	I0729 01:59:07.799890   59039 main.go:141] libmachine: (auto-464146)     <disk type='file' device='cdrom'>
	I0729 01:59:07.799903   59039 main.go:141] libmachine: (auto-464146)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/boot2docker.iso'/>
	I0729 01:59:07.799917   59039 main.go:141] libmachine: (auto-464146)       <target dev='hdc' bus='scsi'/>
	I0729 01:59:07.799929   59039 main.go:141] libmachine: (auto-464146)       <readonly/>
	I0729 01:59:07.799951   59039 main.go:141] libmachine: (auto-464146)     </disk>
	I0729 01:59:07.799965   59039 main.go:141] libmachine: (auto-464146)     <disk type='file' device='disk'>
	I0729 01:59:07.799977   59039 main.go:141] libmachine: (auto-464146)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 01:59:07.799992   59039 main.go:141] libmachine: (auto-464146)       <source file='/home/jenkins/minikube-integration/19312-9421/.minikube/machines/auto-464146/auto-464146.rawdisk'/>
	I0729 01:59:07.800006   59039 main.go:141] libmachine: (auto-464146)       <target dev='hda' bus='virtio'/>
	I0729 01:59:07.800017   59039 main.go:141] libmachine: (auto-464146)     </disk>
	I0729 01:59:07.800025   59039 main.go:141] libmachine: (auto-464146)     <interface type='network'>
	I0729 01:59:07.800037   59039 main.go:141] libmachine: (auto-464146)       <source network='mk-auto-464146'/>
	I0729 01:59:07.800047   59039 main.go:141] libmachine: (auto-464146)       <model type='virtio'/>
	I0729 01:59:07.800054   59039 main.go:141] libmachine: (auto-464146)     </interface>
	I0729 01:59:07.800065   59039 main.go:141] libmachine: (auto-464146)     <interface type='network'>
	I0729 01:59:07.800082   59039 main.go:141] libmachine: (auto-464146)       <source network='default'/>
	I0729 01:59:07.800097   59039 main.go:141] libmachine: (auto-464146)       <model type='virtio'/>
	I0729 01:59:07.800107   59039 main.go:141] libmachine: (auto-464146)     </interface>
	I0729 01:59:07.800113   59039 main.go:141] libmachine: (auto-464146)     <serial type='pty'>
	I0729 01:59:07.800121   59039 main.go:141] libmachine: (auto-464146)       <target port='0'/>
	I0729 01:59:07.800131   59039 main.go:141] libmachine: (auto-464146)     </serial>
	I0729 01:59:07.800139   59039 main.go:141] libmachine: (auto-464146)     <console type='pty'>
	I0729 01:59:07.800150   59039 main.go:141] libmachine: (auto-464146)       <target type='serial' port='0'/>
	I0729 01:59:07.800163   59039 main.go:141] libmachine: (auto-464146)     </console>
	I0729 01:59:07.800173   59039 main.go:141] libmachine: (auto-464146)     <rng model='virtio'>
	I0729 01:59:07.800205   59039 main.go:141] libmachine: (auto-464146)       <backend model='random'>/dev/random</backend>
	I0729 01:59:07.800228   59039 main.go:141] libmachine: (auto-464146)     </rng>
	I0729 01:59:07.800254   59039 main.go:141] libmachine: (auto-464146)     
	I0729 01:59:07.800265   59039 main.go:141] libmachine: (auto-464146)     
	I0729 01:59:07.800283   59039 main.go:141] libmachine: (auto-464146)   </devices>
	I0729 01:59:07.800298   59039 main.go:141] libmachine: (auto-464146) </domain>
	I0729 01:59:07.800312   59039 main.go:141] libmachine: (auto-464146) 
	I0729 01:59:07.804552   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:9e:15:00 in network default
	I0729 01:59:07.805125   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:07.805165   59039 main.go:141] libmachine: (auto-464146) Ensuring networks are active...
	I0729 01:59:07.805812   59039 main.go:141] libmachine: (auto-464146) Ensuring network default is active
	I0729 01:59:07.806144   59039 main.go:141] libmachine: (auto-464146) Ensuring network mk-auto-464146 is active
	I0729 01:59:07.806653   59039 main.go:141] libmachine: (auto-464146) Getting domain xml...
	I0729 01:59:07.807397   59039 main.go:141] libmachine: (auto-464146) Creating domain...
	I0729 01:59:09.018191   59039 main.go:141] libmachine: (auto-464146) Waiting to get IP...
	I0729 01:59:09.018960   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.019417   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.019461   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.019416   59157 retry.go:31] will retry after 194.8096ms: waiting for machine to come up
	I0729 01:59:09.215873   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.216366   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.216396   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.216322   59157 retry.go:31] will retry after 242.431083ms: waiting for machine to come up
	I0729 01:59:09.461023   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.461558   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.461590   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.461473   59157 retry.go:31] will retry after 416.34467ms: waiting for machine to come up
	I0729 01:59:09.879015   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:09.879611   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:09.879636   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:09.879568   59157 retry.go:31] will retry after 555.162173ms: waiting for machine to come up
	I0729 01:59:10.436035   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:10.436519   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:10.436549   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:10.436465   59157 retry.go:31] will retry after 499.35339ms: waiting for machine to come up
	I0729 01:59:10.937167   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:10.937668   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:10.937690   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:10.937631   59157 retry.go:31] will retry after 802.525274ms: waiting for machine to come up
	I0729 01:59:14.119322   58942 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.582282318s)
	I0729 01:59:14.119361   58942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 01:59:14.119412   58942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 01:59:14.124545   58942 start.go:563] Will wait 60s for crictl version
	I0729 01:59:14.124605   58942 ssh_runner.go:195] Run: which crictl
	I0729 01:59:14.128484   58942 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 01:59:14.168118   58942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 01:59:14.168203   58942 ssh_runner.go:195] Run: crio --version
	I0729 01:59:14.200449   58942 ssh_runner.go:195] Run: crio --version
	I0729 01:59:14.232801   58942 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 01:59:14.234159   58942 main.go:141] libmachine: (pause-112077) Calling .GetIP
	I0729 01:59:14.237240   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:14.237641   58942 main.go:141] libmachine: (pause-112077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:6f:bd", ip: ""} in network mk-pause-112077: {Iface:virbr3 ExpiryTime:2024-07-29 02:57:40 +0000 UTC Type:0 Mac:52:54:00:40:6f:bd Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:pause-112077 Clientid:01:52:54:00:40:6f:bd}
	I0729 01:59:14.237665   58942 main.go:141] libmachine: (pause-112077) DBG | domain pause-112077 has defined IP address 192.168.39.22 and MAC address 52:54:00:40:6f:bd in network mk-pause-112077
	I0729 01:59:14.237893   58942 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 01:59:14.242341   58942 kubeadm.go:883] updating cluster {Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 01:59:14.242483   58942 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 01:59:14.242531   58942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:59:14.287217   58942 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:59:14.287245   58942 crio.go:433] Images already preloaded, skipping extraction
	I0729 01:59:14.287300   58942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 01:59:14.323695   58942 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 01:59:14.323718   58942 cache_images.go:84] Images are preloaded, skipping loading
	I0729 01:59:14.323728   58942 kubeadm.go:934] updating node { 192.168.39.22 8443 v1.30.3 crio true true} ...
	I0729 01:59:14.323855   58942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-112077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 01:59:14.323942   58942 ssh_runner.go:195] Run: crio config
	I0729 01:59:14.374595   58942 cni.go:84] Creating CNI manager for ""
	I0729 01:59:14.374621   58942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 01:59:14.374632   58942 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 01:59:14.374651   58942 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-112077 NodeName:pause-112077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 01:59:14.374797   58942 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-112077"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 01:59:14.374857   58942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 01:59:14.386277   58942 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 01:59:14.386344   58942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 01:59:14.396825   58942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 01:59:14.414514   58942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 01:59:14.432141   58942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 01:59:14.450392   58942 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I0729 01:59:14.454825   58942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 01:59:14.594288   58942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 01:59:14.610451   58942 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077 for IP: 192.168.39.22
	I0729 01:59:14.610477   58942 certs.go:194] generating shared ca certs ...
	I0729 01:59:14.610499   58942 certs.go:226] acquiring lock for ca certs: {Name:mk38990dfcfc110385233f177adf374470c56ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 01:59:14.610669   58942 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key
	I0729 01:59:14.610731   58942 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key
	I0729 01:59:14.610744   58942 certs.go:256] generating profile certs ...
	I0729 01:59:14.610857   58942 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/client.key
	I0729 01:59:14.610946   58942 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.key.f5507500
	I0729 01:59:14.610981   58942 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.key
	I0729 01:59:14.611118   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem (1338 bytes)
	W0729 01:59:14.611163   58942 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623_empty.pem, impossibly tiny 0 bytes
	I0729 01:59:14.611175   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 01:59:14.611200   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/ca.pem (1078 bytes)
	I0729 01:59:14.611221   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/cert.pem (1123 bytes)
	I0729 01:59:14.611240   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/certs/key.pem (1675 bytes)
	I0729 01:59:14.611283   58942 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem (1708 bytes)
	I0729 01:59:14.612635   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 01:59:14.639933   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 01:59:14.671306   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 01:59:14.696529   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 01:59:14.723633   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 01:59:14.750066   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 01:59:14.775261   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 01:59:14.800245   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/pause-112077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 01:59:14.824108   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/ssl/certs/166232.pem --> /usr/share/ca-certificates/166232.pem (1708 bytes)
	I0729 01:59:14.848134   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 01:59:14.873158   58942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-9421/.minikube/certs/16623.pem --> /usr/share/ca-certificates/16623.pem (1338 bytes)
	I0729 01:59:14.935107   58942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 01:59:15.026387   58942 ssh_runner.go:195] Run: openssl version
	I0729 01:59:15.130264   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166232.pem && ln -fs /usr/share/ca-certificates/166232.pem /etc/ssl/certs/166232.pem"
	I0729 01:59:15.195680   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.224631   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 00:59 /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.224712   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166232.pem
	I0729 01:59:15.281540   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166232.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 01:59:11.741524   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:11.742003   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:11.742033   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:11.741954   59157 retry.go:31] will retry after 1.01251303s: waiting for machine to come up
	I0729 01:59:12.756519   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:12.757011   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:12.757062   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:12.756950   59157 retry.go:31] will retry after 1.161433115s: waiting for machine to come up
	I0729 01:59:13.920033   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:13.920500   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:13.920530   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:13.920455   59157 retry.go:31] will retry after 1.356984409s: waiting for machine to come up
	I0729 01:59:15.278624   59039 main.go:141] libmachine: (auto-464146) DBG | domain auto-464146 has defined MAC address 52:54:00:33:2b:3e in network mk-auto-464146
	I0729 01:59:15.279068   59039 main.go:141] libmachine: (auto-464146) DBG | unable to find current IP address of domain auto-464146 in network mk-auto-464146
	I0729 01:59:15.279096   59039 main.go:141] libmachine: (auto-464146) DBG | I0729 01:59:15.278999   59157 retry.go:31] will retry after 1.811064228s: waiting for machine to come up
	I0729 01:59:19.224073   57807 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 01:59:19.224312   57807 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 01:59:15.324983   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 01:59:15.388784   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.397220   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 00:49 /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.397287   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 01:59:15.438073   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 01:59:15.472575   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16623.pem && ln -fs /usr/share/ca-certificates/16623.pem /etc/ssl/certs/16623.pem"
	I0729 01:59:15.515620   58942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.542374   58942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 00:59 /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.542440   58942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16623.pem
	I0729 01:59:15.565637   58942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16623.pem /etc/ssl/certs/51391683.0"
	I0729 01:59:15.599468   58942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 01:59:15.616932   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 01:59:15.641159   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 01:59:15.654715   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 01:59:15.662431   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 01:59:15.679608   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 01:59:15.716800   58942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 01:59:15.752446   58942 kubeadm.go:392] StartCluster: {Name:pause-112077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-112077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:59:15.752546   58942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 01:59:15.752616   58942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 01:59:15.841118   58942 cri.go:89] found id: "9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0"
	I0729 01:59:15.841142   58942 cri.go:89] found id: "85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c"
	I0729 01:59:15.841149   58942 cri.go:89] found id: "8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35"
	I0729 01:59:15.841155   58942 cri.go:89] found id: "4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209"
	I0729 01:59:15.841160   58942 cri.go:89] found id: "ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3"
	I0729 01:59:15.841165   58942 cri.go:89] found id: "e4d9f94c7295602523ac69bb831cc319589d7b7ffb759c0822d74a5f4dd4f111"
	I0729 01:59:15.841169   58942 cri.go:89] found id: "6b220db8847222b6aa66fb6db253b1090864c6f6b39d4af7370baedd227ac46f"
	I0729 01:59:15.841174   58942 cri.go:89] found id: "d6ad9b45cb0b70ae5153758bc6999651e9c4e36cd7a8952d9b5164cab11b0d8e"
	I0729 01:59:15.841179   58942 cri.go:89] found id: "0012d6373d33da8958ee44eb8a5a736bfdebdca4b4f8b302fb57d5c64fb0397e"
	I0729 01:59:15.841188   58942 cri.go:89] found id: "f34eddb15984881aacb430691d7f4d407a4e08e554b8df639f4ecdde23f8c561"
	I0729 01:59:15.841192   58942 cri.go:89] found id: ""
	I0729 01:59:15.841248   58942 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.460308909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218405460279905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e1f5091-edf7-4f9b-8001-0ee9a1399834 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.460995775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64de2a17-d787-4198-81bd-01a7c675f543 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.461067866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64de2a17-d787-4198-81bd-01a7c675f543 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.461391155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64de2a17-d787-4198-81bd-01a7c675f543 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.509316446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ce2bec3-2695-433b-801a-dc021c91449e name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.509448675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ce2bec3-2695-433b-801a-dc021c91449e name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.510508296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4213275b-4c40-4b93-8792-9bbdb7d1f287 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.511176408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218405511142736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4213275b-4c40-4b93-8792-9bbdb7d1f287 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.512157976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79ce513d-6094-4c36-861d-fb143749657f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.512211408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79ce513d-6094-4c36-861d-fb143749657f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.512473130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79ce513d-6094-4c36-861d-fb143749657f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.560615622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16f4e271-36e7-4cda-9fde-3d3771d3b5e5 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.560693102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16f4e271-36e7-4cda-9fde-3d3771d3b5e5 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.562222635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=882e7cca-c209-4bd9-b8c8-d37d847247ad name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.563038044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218405562912435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=882e7cca-c209-4bd9-b8c8-d37d847247ad name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.563808780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de2fa6b7-08d1-4926-8121-c35187287c9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.563860348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de2fa6b7-08d1-4926-8121-c35187287c9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.564181446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de2fa6b7-08d1-4926-8121-c35187287c9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.626048793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dede10dc-1d90-4f60-b53f-6fe2a89db0f4 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.626177906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dede10dc-1d90-4f60-b53f-6fe2a89db0f4 name=/runtime.v1.RuntimeService/Version
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.627677483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30540a4d-37c7-4595-a134-e7b64f4f8acf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.628639253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722218405628608062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30540a4d-37c7-4595-a134-e7b64f4f8acf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.629552675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79ca28ac-4ea3-4ca3-b2a0-7f2a7e9ef22a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.629624514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79ca28ac-4ea3-4ca3-b2a0-7f2a7e9ef22a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 02:00:05 pause-112077 crio[2455]: time="2024-07-29 02:00:05.630075837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722218378900906750,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722218378876703189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722218378851412305,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722218376776796057,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722218371776491915,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb95840,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035,PodSandboxId:8eb5c1b53ab2616e65a12dbdc859e6e5346ead662b1e38bf340e823f1b1389c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722218356143375139,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50,PodSandboxId:44ffcdb427f6e20d7ea417bae9607e5255b28f3bf9678baebbcd2bc004f4ce28,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722218355418910612,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m6zq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c,},Annotations:map[string]string{io.kubernetes.container.hash: cdb958
40,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0,PodSandboxId:b7c26063b3953be16b16695c3eebdf9f9914dfdffa3d889f80f29e974c5889d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722218355412355999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d466c7c5637c35513d90103d11d837ec,},Annotations:map[string]string{io.kubernetes.container.hash: 82629e81,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c,PodSandboxId:9625bb8069e785ba3105802d07ca42e2b4db572b8bb4d497c3bbc517afdb82e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722218355350158413,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee17b01eb59302a56a478e8f065fe54,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35,PodSandboxId:f6fd9e5342ad93d73672d00fbd514d20338707b975f83aa6e6abe70ac662e382,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722218355267881686,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0214a0633ffd83097e82bb4653d76e15,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209,PodSandboxId:7155380eee197110aa1080df536f0fed89f4cf1b72deb6c83c4981330cef6feb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722218355241991658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e233f10ca65887d6f7104393588a521b,},Annotations:map[string]string{io.kubernetes.container.hash: d0866b8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3,PodSandboxId:d9b1baebff90b336501c86c7f696de67dd1b64fdde9ef4c020e00cc97edc3d02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722218300133224421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2krfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 709db69f-5c21-49dd-b30d-3395f0043e30,},Annotations:map[string]string{io.kubernetes.container.hash: e59a1d27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79ca28ac-4ea3-4ca3-b2a0-7f2a7e9ef22a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7cb97813bee57       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   26 seconds ago       Running             kube-apiserver            2                   7155380eee197       kube-apiserver-pause-112077
	7ee3b6f7935d7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   26 seconds ago       Running             kube-scheduler            2                   9625bb8069e78       kube-scheduler-pause-112077
	e2a7c3c5f8ebd       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   26 seconds ago       Running             kube-controller-manager   2                   f6fd9e5342ad9       kube-controller-manager-pause-112077
	27b681f3330be       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago       Running             etcd                      2                   b7c26063b3953       etcd-pause-112077
	4b6577660f08c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   33 seconds ago       Running             kube-proxy                2                   44ffcdb427f6e       kube-proxy-m6zq2
	e3a65d7355efd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   49 seconds ago       Running             coredns                   1                   8eb5c1b53ab26       coredns-7db6d8ff4d-2krfb
	74b922622ee31       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   50 seconds ago       Exited              kube-proxy                1                   44ffcdb427f6e       kube-proxy-m6zq2
	9d73da1cfbd34       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   50 seconds ago       Exited              etcd                      1                   b7c26063b3953       etcd-pause-112077
	85a42262e0858       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   50 seconds ago       Exited              kube-scheduler            1                   9625bb8069e78       kube-scheduler-pause-112077
	8ae9f347b1aff       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   50 seconds ago       Exited              kube-controller-manager   1                   f6fd9e5342ad9       kube-controller-manager-pause-112077
	4385bc3017a2e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   50 seconds ago       Exited              kube-apiserver            1                   7155380eee197       kube-apiserver-pause-112077
	ccfb93cf0ded9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d9b1baebff90b       coredns-7db6d8ff4d-2krfb
	
	
	==> coredns [ccfb93cf0ded9005433905729425ef28b3eaf3c3b21e5e8b24486dac34ca2cf3] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1200011102]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:58:20.792) (total time: 30003ms):
	Trace[1200011102]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:58:50.796)
	Trace[1200011102]: [30.003945038s] [30.003945038s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1289154887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:58:20.792) (total time: 30004ms):
	Trace[1289154887]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (01:58:50.795)
	Trace[1289154887]: [30.004481672s] [30.004481672s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1566846437]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:58:20.794) (total time: 30002ms):
	Trace[1566846437]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (01:58:50.796)
	Trace[1566846437]: [30.002604991s] [30.002604991s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42162 - 39328 "HINFO IN 3105883890315905101.6052643386566175906. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013614543s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e3a65d7355efd21be160e5a3dfca198433a37d8ef776a86984538859be542035] <==
	Trace[1917326646]: [10.001535742s] [10.001535742s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2086429211]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:16.615) (total time: 10001ms):
	Trace[2086429211]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:59:26.616)
	Trace[2086429211]: [10.001705831s] [10.001705831s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1173472798]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:16.619) (total time: 10007ms):
	Trace[1173472798]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (01:59:26.626)
	Trace[1173472798]: [10.007268815s] [10.007268815s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59600->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[974364603]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:27.573) (total time: 10164ms):
	Trace[974364603]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59600->10.96.0.1:443: read: connection reset by peer 10164ms (01:59:37.737)
	Trace[974364603]: [10.164490562s] [10.164490562s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59600->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59616->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59614->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1032750168]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 01:59:27.693) (total time: 10044ms):
	Trace[1032750168]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59614->10.96.0.1:443: read: connection reset by peer 10044ms (01:59:37.738)
	Trace[1032750168]: [10.044967684s] [10.044967684s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:59614->10.96.0.1:443: read: connection reset by peer
	
	
	==> describe nodes <==
	Name:               pause-112077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-112077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=608d90af2517e2ec0044e62b20376f40276621a1
	                    minikube.k8s.io/name=pause-112077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T01_58_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 01:58:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-112077
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 02:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 01:59:42 +0000   Mon, 29 Jul 2024 01:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    pause-112077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5480a6146014bd185ab01e674c9d5a1
	  System UUID:                f5480a61-4601-4bd1-85ab-01e674c9d5a1
	  Boot ID:                    6344c9eb-2544-4631-a978-91179b4d3a14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2krfb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-pause-112077                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-pause-112077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-pause-112077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-m6zq2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-112077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             170Mi (8%)   170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 23s                  kube-proxy       
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m8s)  kubelet          Node pause-112077 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m8s)  kubelet          Node pause-112077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m8s)  kubelet          Node pause-112077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node pause-112077 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node pause-112077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node pause-112077 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeReady                2m1s                 kubelet          Node pause-112077 status is now: NodeReady
	  Normal  RegisteredNode           109s                 node-controller  Node pause-112077 event: Registered Node pause-112077 in Controller
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)    kubelet          Node pause-112077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)    kubelet          Node pause-112077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)    kubelet          Node pause-112077 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                  node-controller  Node pause-112077 event: Registered Node pause-112077 in Controller
	
	
	==> dmesg <==
	[  +0.062640] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074406] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.165903] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.135502] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.309431] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.514993] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.070540] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.137350] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.957399] kauditd_printk_skb: 57 callbacks suppressed
	[Jul29 01:58] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.085426] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.004586] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.343691] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[ +12.985996] kauditd_printk_skb: 89 callbacks suppressed
	[Jul29 01:59] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.138890] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.167015] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.133439] systemd-fstab-generator[2412]: Ignoring "noauto" option for root device
	[  +0.281950] systemd-fstab-generator[2440]: Ignoring "noauto" option for root device
	[  +6.062575] systemd-fstab-generator[2566]: Ignoring "noauto" option for root device
	[  +0.076163] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.548544] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.961664] systemd-fstab-generator[3392]: Ignoring "noauto" option for root device
	[  +4.541296] kauditd_printk_skb: 38 callbacks suppressed
	[ +15.794043] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	
	
	==> etcd [27b681f3330bed484ab256fb5cdb2d69276102895fc24c7984cd8298d3e48d63] <==
	{"level":"warn","ts":"2024-07-29T01:59:42.764887Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.461569Z","time spent":"303.316761ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-07-29T01:59:42.764315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.820431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T01:59:42.765441Z","caller":"traceutil/trace.go:171","msg":"trace[2010001286] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:441; }","duration":"303.944476ms","start":"2024-07-29T01:59:42.461486Z","end":"2024-07-29T01:59:42.76543Z","steps":["trace[2010001286] 'range keys from in-memory index tree'  (duration: 302.744772ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:42.765479Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.461466Z","time spent":"303.992282ms","remote":"127.0.0.1:56512","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":29,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
	{"level":"warn","ts":"2024-07-29T01:59:42.765004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.636924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-07-29T01:59:42.765682Z","caller":"traceutil/trace.go:171","msg":"trace[539742882] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:441; }","duration":"191.318085ms","start":"2024-07-29T01:59:42.57434Z","end":"2024-07-29T01:59:42.765658Z","steps":["trace[539742882] 'agreement among raft nodes before linearized reading'  (duration: 190.561777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:42.765048Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.657255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-29T01:59:42.765814Z","caller":"traceutil/trace.go:171","msg":"trace[2144773494] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:441; }","duration":"191.447861ms","start":"2024-07-29T01:59:42.574358Z","end":"2024-07-29T01:59:42.765806Z","steps":["trace[2144773494] 'agreement among raft nodes before linearized reading'  (duration: 190.671088ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.114107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.248747ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16526399720541034633 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" value_size:462 lease:7303027683686258821 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T01:59:43.114419Z","caller":"traceutil/trace.go:171","msg":"trace[182135047] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"315.495584ms","start":"2024-07-29T01:59:42.798912Z","end":"2024-07-29T01:59:43.114407Z","steps":["trace[182135047] 'process raft request'  (duration: 315.455052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.114595Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.253675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-2krfb\" ","response":"range_response_count:1 size:4729"}
	{"level":"info","ts":"2024-07-29T01:59:43.114691Z","caller":"traceutil/trace.go:171","msg":"trace[201774229] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-2krfb; range_end:; response_count:1; response_revision:444; }","duration":"344.402029ms","start":"2024-07-29T01:59:42.770266Z","end":"2024-07-29T01:59:43.114668Z","steps":["trace[201774229] 'agreement among raft nodes before linearized reading'  (duration: 344.188208ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.114742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.770255Z","time spent":"344.477939ms","remote":"127.0.0.1:56696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4753,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-2krfb\" "}
	{"level":"warn","ts":"2024-07-29T01:59:43.11497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.798893Z","time spent":"315.595579ms","remote":"127.0.0.1:56696","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4546,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-112077\" mod_revision:303 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-112077\" value_size:4484 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-112077\" > >"}
	{"level":"info","ts":"2024-07-29T01:59:43.115004Z","caller":"traceutil/trace.go:171","msg":"trace[226788179] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"344.979719ms","start":"2024-07-29T01:59:42.770012Z","end":"2024-07-29T01:59:43.114992Z","steps":["trace[226788179] 'process raft request'  (duration: 125.425991ms)","trace[226788179] 'compare'  (duration: 218.133639ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:59:43.115117Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.769997Z","time spent":"345.091642ms","remote":"127.0.0.1:57028","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":534,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-112077.17e68c7e7ef2a3e8\" value_size:462 lease:7303027683686258821 >> failure:<>"}
	{"level":"info","ts":"2024-07-29T01:59:43.114424Z","caller":"traceutil/trace.go:171","msg":"trace[1471502505] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:466; }","duration":"344.060015ms","start":"2024-07-29T01:59:42.770334Z","end":"2024-07-29T01:59:43.114394Z","steps":["trace[1471502505] 'read index received'  (duration: 125.113345ms)","trace[1471502505] 'applied index is now lower than readState.Index'  (duration: 218.944786ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T01:59:43.115363Z","caller":"traceutil/trace.go:171","msg":"trace[1823201464] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"326.984702ms","start":"2024-07-29T01:59:42.78837Z","end":"2024-07-29T01:59:43.115354Z","steps":["trace[1823201464] 'process raft request'  (duration: 325.91442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.11544Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.78835Z","time spent":"327.060808ms","remote":"127.0.0.1:56688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5412,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/pause-112077\" mod_revision:415 > success:<request_put:<key:\"/registry/minions/pause-112077\" value_size:5374 >> failure:<request_range:<key:\"/registry/minions/pause-112077\" > >"}
	{"level":"warn","ts":"2024-07-29T01:59:43.115674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.069097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-m6zq2\" ","response":"range_response_count:1 size:4590"}
	{"level":"info","ts":"2024-07-29T01:59:43.115719Z","caller":"traceutil/trace.go:171","msg":"trace[1646549402] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-m6zq2; range_end:; response_count:1; response_revision:444; }","duration":"345.131818ms","start":"2024-07-29T01:59:42.77058Z","end":"2024-07-29T01:59:43.115711Z","steps":["trace[1646549402] 'agreement among raft nodes before linearized reading'  (duration: 345.027062ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T01:59:43.115742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T01:59:42.770565Z","time spent":"345.171117ms","remote":"127.0.0.1:56696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4614,"request content":"key:\"/registry/pods/kube-system/kube-proxy-m6zq2\" "}
	{"level":"info","ts":"2024-07-29T01:59:43.236776Z","caller":"traceutil/trace.go:171","msg":"trace[1756128147] linearizableReadLoop","detail":"{readStateIndex:470; appliedIndex:469; }","duration":"101.343222ms","start":"2024-07-29T01:59:43.135417Z","end":"2024-07-29T01:59:43.236761Z","steps":["trace[1756128147] 'read index received'  (duration: 94.395697ms)","trace[1756128147] 'applied index is now lower than readState.Index'  (duration: 6.947003ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T01:59:43.237325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.892387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" ","response":"range_response_count:53 size:37203"}
	{"level":"info","ts":"2024-07-29T01:59:43.23739Z","caller":"traceutil/trace.go:171","msg":"trace[192355523] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:53; response_revision:445; }","duration":"101.978648ms","start":"2024-07-29T01:59:43.135402Z","end":"2024-07-29T01:59:43.23738Z","steps":["trace[192355523] 'agreement among raft nodes before linearized reading'  (duration: 101.563556ms)"],"step_count":1}
	
	
	==> etcd [9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0] <==
	{"level":"info","ts":"2024-07-29T01:59:15.884565Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.877829ms"}
	{"level":"info","ts":"2024-07-29T01:59:15.928053Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T01:59:15.989268Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","commit-index":459}
	{"level":"info","ts":"2024-07-29T01:59:15.989605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T01:59:15.989749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 became follower at term 2"}
	{"level":"info","ts":"2024-07-29T01:59:15.989762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft cde0bb267fc4e559 [peers: [], term: 2, commit: 459, applied: 0, lastindex: 459, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T01:59:15.993192Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T01:59:16.019599Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":438}
	{"level":"info","ts":"2024-07-29T01:59:16.029752Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T01:59:16.036437Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"cde0bb267fc4e559","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037088Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"cde0bb267fc4e559"}
	{"level":"info","ts":"2024-07-29T01:59:16.037126Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"cde0bb267fc4e559","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T01:59:16.037436Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T01:59:16.037596Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037635Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037645Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T01:59:16.037891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=(14835062946585175385)"}
	{"level":"info","ts":"2024-07-29T01:59:16.038435Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","added-peer-id":"cde0bb267fc4e559","added-peer-peer-urls":["https://192.168.39.22:2380"]}
	{"level":"info","ts":"2024-07-29T01:59:16.038553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:59:16.03858Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T01:59:16.045076Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T01:59:16.045172Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-07-29T01:59:16.045476Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-07-29T01:59:16.0474Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cde0bb267fc4e559","initial-advertise-peer-urls":["https://192.168.39.22:2380"],"listen-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T01:59:16.047433Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 02:00:06 up 2 min,  0 users,  load average: 0.58, 0.34, 0.13
	Linux pause-112077 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209] <==
	I0729 01:59:16.153310       1 server.go:148] Version: v1.30.3
	I0729 01:59:16.153400       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 01:59:16.750119       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:16.750356       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 01:59:16.751552       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 01:59:16.758181       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 01:59:16.763122       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 01:59:16.763238       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 01:59:16.763436       1 instance.go:299] Using reconciler: lease
	W0729 01:59:16.765032       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:17.751170       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:17.751242       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:17.766595       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:19.492464       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:19.528874       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:19.549371       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:21.695452       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:22.378846       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:22.442362       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:26.081666       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:26.766074       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:27.323643       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:32.354141       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:32.494642       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 01:59:32.799131       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7cb97813bee570d4515e57d507a3c09e62757876fe536363a3f14b3262d8f568] <==
	I0729 01:59:42.318115       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 01:59:42.320443       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 01:59:42.342706       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 01:59:42.346085       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 01:59:42.370114       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 01:59:42.766443       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 01:59:43.118088       1 trace.go:236] Trace[1720699490]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:29311928-7a74-415d-8c9b-d234238eb000,client:192.168.39.22,api-group:events.k8s.io,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:events,scope:resource,url:/apis/events.k8s.io/v1/namespaces/default/events,user-agent:kube-proxy/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 01:59:42.454) (total time: 663ms):
	Trace[1720699490]: ["Create etcd3" audit-id:29311928-7a74-415d-8c9b-d234238eb000,key:/events/default/pause-112077.17e68c7e7ef2a3e8,type:*core.Event,resource:events 662ms (01:59:42.455)
	Trace[1720699490]:  ---"TransformToStorage succeeded" 312ms (01:59:42.768)
	Trace[1720699490]:  ---"Txn call succeeded" 349ms (01:59:43.117)]
	Trace[1720699490]: [663.474629ms] [663.474629ms] END
	I0729 01:59:43.121011       1 trace.go:236] Trace[461850243]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:14b98d19-ea1e-42a0-9dc0-3adb7eb8939d,client:192.168.39.22,api-group:,api-version:v1,name:coredns,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 01:59:42.572) (total time: 548ms):
	Trace[461850243]: ---"watchCache locked acquired" 545ms (01:59:43.118)
	Trace[461850243]: [548.331413ms] [548.331413ms] END
	I0729 01:59:43.122732       1 trace.go:236] Trace[1148024247]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:efa571ca-87ca-49f7-9624-3c1be483e0de,client:192.168.39.22,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 01:59:42.572) (total time: 550ms):
	Trace[1148024247]: ---"watchCache locked acquired" 545ms (01:59:43.118)
	Trace[1148024247]: [550.283395ms] [550.283395ms] END
	I0729 01:59:43.134326       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 01:59:44.128278       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 01:59:44.150407       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 01:59:44.193562       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 01:59:44.224802       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 01:59:44.234111       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 01:59:55.111563       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 01:59:55.260848       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35] <==
	I0729 01:59:16.952516       1 serving.go:380] Generated self-signed cert in-memory
	I0729 01:59:17.306274       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 01:59:17.306368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:59:17.308045       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 01:59:17.308575       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 01:59:17.308587       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 01:59:17.308609       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [e2a7c3c5f8ebd391af7f184e6fb1a61f0346608de8c1d8c83a4621443130db53] <==
	I0729 01:59:55.116625       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0729 01:59:55.122754       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 01:59:55.134381       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 01:59:55.137773       1 shared_informer.go:320] Caches are synced for taint
	I0729 01:59:55.137863       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 01:59:55.138073       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 01:59:55.138360       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-112077"
	I0729 01:59:55.138509       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 01:59:55.143051       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 01:59:55.145466       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 01:59:55.200191       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 01:59:55.209230       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 01:59:55.229085       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 01:59:55.231548       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 01:59:55.248741       1 shared_informer.go:320] Caches are synced for expand
	I0729 01:59:55.257329       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 01:59:55.307102       1 shared_informer.go:320] Caches are synced for disruption
	I0729 01:59:55.319484       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 01:59:55.320270       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 01:59:55.320460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.766µs"
	I0729 01:59:55.331779       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 01:59:55.356167       1 shared_informer.go:320] Caches are synced for deployment
	I0729 01:59:55.752764       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 01:59:55.752911       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 01:59:55.770026       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [4b6577660f08ced09a77d6f9c12a6fe589132ad5be89e697d6db7f740a6c16e4] <==
	I0729 01:59:31.902071       1 server_linux.go:69] "Using iptables proxy"
	E0729 01:59:37.737805       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-112077\": dial tcp 192.168.39.22:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.22:47348->192.168.39.22:8443: read: connection reset by peer"
	E0729 01:59:38.777366       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-112077\": dial tcp 192.168.39.22:8443: connect: connection refused"
	I0729 01:59:42.297403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	I0729 01:59:42.425141       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 01:59:42.425349       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 01:59:42.425522       1 server_linux.go:165] "Using iptables Proxier"
	I0729 01:59:42.433745       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 01:59:42.434743       1 server.go:872] "Version info" version="v1.30.3"
	I0729 01:59:42.434823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:59:42.443211       1 config.go:192] "Starting service config controller"
	I0729 01:59:42.448045       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 01:59:42.443670       1 config.go:101] "Starting endpoint slice config controller"
	I0729 01:59:42.448115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 01:59:42.444532       1 config.go:319] "Starting node config controller"
	I0729 01:59:42.448129       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 01:59:42.548489       1 shared_informer.go:320] Caches are synced for node config
	I0729 01:59:42.548519       1 shared_informer.go:320] Caches are synced for service config
	I0729 01:59:42.548559       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [74b922622ee312381369aaa446b5c7db776cc251adbcb6b34e565512671f9e50] <==
	
	
	==> kube-scheduler [7ee3b6f7935d7110fe5162cfb681e14f2b52f10c0f0df7e11ed5863c710e7424] <==
	I0729 01:59:39.882831       1 serving.go:380] Generated self-signed cert in-memory
	W0729 01:59:42.229672       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 01:59:42.229781       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 01:59:42.229796       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 01:59:42.229806       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 01:59:42.357741       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 01:59:42.357791       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 01:59:42.368298       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 01:59:42.371056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 01:59:42.371205       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 01:59:42.371299       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 01:59:42.474111       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c] <==
	I0729 01:59:16.923241       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.597887    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0214a0633ffd83097e82bb4653d76e15-kubeconfig\") pod \"kube-controller-manager-pause-112077\" (UID: \"0214a0633ffd83097e82bb4653d76e15\") " pod="kube-system/kube-controller-manager-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.597905    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bee17b01eb59302a56a478e8f065fe54-kubeconfig\") pod \"kube-scheduler-pause-112077\" (UID: \"bee17b01eb59302a56a478e8f065fe54\") " pod="kube-system/kube-scheduler-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.597989    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d466c7c5637c35513d90103d11d837ec-etcd-certs\") pod \"etcd-pause-112077\" (UID: \"d466c7c5637c35513d90103d11d837ec\") " pod="kube-system/etcd-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.598011    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d466c7c5637c35513d90103d11d837ec-etcd-data\") pod \"etcd-pause-112077\" (UID: \"d466c7c5637c35513d90103d11d837ec\") " pod="kube-system/etcd-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.598030    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e233f10ca65887d6f7104393588a521b-k8s-certs\") pod \"kube-apiserver-pause-112077\" (UID: \"e233f10ca65887d6f7104393588a521b\") " pod="kube-system/kube-apiserver-pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.686378    3399 kubelet_node_status.go:73] "Attempting to register node" node="pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: E0729 01:59:38.687635    3399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.22:8443: connect: connection refused" node="pause-112077"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.821379    3399 scope.go:117] "RemoveContainer" containerID="8ae9f347b1aff4a099b4dca75aac9fe8fb72e4e291b52ee9c8a7c90949e11a35"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.822647    3399 scope.go:117] "RemoveContainer" containerID="85a42262e085870cb0271aeb77fa37cf2df79478b07b02a0e202030aa7841d9c"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.824295    3399 scope.go:117] "RemoveContainer" containerID="9d73da1cfbd34155ca352d5d60df41bc58831aa78fffe4950273ff80b41afcc0"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: I0729 01:59:38.835816    3399 scope.go:117] "RemoveContainer" containerID="4385bc3017a2edbbeeb6961df651205eef4a8ada814e2ade19898ef4ec240209"
	Jul 29 01:59:38 pause-112077 kubelet[3399]: E0729 01:59:38.989295    3399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-112077?timeout=10s\": dial tcp 192.168.39.22:8443: connect: connection refused" interval="800ms"
	Jul 29 01:59:39 pause-112077 kubelet[3399]: I0729 01:59:39.089244    3399 kubelet_node_status.go:73] "Attempting to register node" node="pause-112077"
	Jul 29 01:59:39 pause-112077 kubelet[3399]: E0729 01:59:39.090271    3399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.22:8443: connect: connection refused" node="pause-112077"
	Jul 29 01:59:39 pause-112077 kubelet[3399]: I0729 01:59:39.892029    3399 kubelet_node_status.go:73] "Attempting to register node" node="pause-112077"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.359309    3399 apiserver.go:52] "Watching apiserver"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.390441    3399 topology_manager.go:215] "Topology Admit Handler" podUID="709db69f-5c21-49dd-b30d-3395f0043e30" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2krfb"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.391749    3399 topology_manager.go:215] "Topology Admit Handler" podUID="7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c" podNamespace="kube-system" podName="kube-proxy-m6zq2"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.482434    3399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.569223    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c-xtables-lock\") pod \"kube-proxy-m6zq2\" (UID: \"7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c\") " pod="kube-system/kube-proxy-m6zq2"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.569307    3399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c-lib-modules\") pod \"kube-proxy-m6zq2\" (UID: \"7e1b7cd6-03b1-4cf4-9378-cbbf06d75a7c\") " pod="kube-system/kube-proxy-m6zq2"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.771518    3399 kubelet_node_status.go:112] "Node was previously registered" node="pause-112077"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.771675    3399 kubelet_node_status.go:76] "Successfully registered node" node="pause-112077"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.773669    3399 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 01:59:42 pause-112077 kubelet[3399]: I0729 01:59:42.775403    3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 02:00:05.098479   59815 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-9421/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
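The "token too long" message above comes from Go's bufio.Scanner, which caps a single line at bufio.MaxScanTokenSize (64 KiB) by default, so an over-long line in lastStart.txt aborts the read. A minimal sketch of scanning such a file with an enlarged buffer, assuming an illustrative path and limit rather than minikube's actual logs.go code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default cap is bufio.MaxScanTokenSize (64 KiB); raise it so a very
		// long line no longer fails with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}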
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-112077 -n pause-112077
helpers_test.go:261: (dbg) Run:  kubectl --context pause-112077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7200.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 02:41:10.265764   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 02:41:27.074002   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kindnet-464146/client.crt: no such file or directory
E0729 02:41:27.215330   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 02:41:44.870873   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/enable-default-cni-464146/client.crt: no such file or directory
E0729 02:41:49.007098   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/client.crt: no such file or directory
E0729 02:42:07.867923   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/calico-464146/client.crt: no such file or directory
E0729 02:42:09.684492   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/flannel-464146/client.crt: no such file or directory
E0729 02:42:16.692093   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/no-preload-944718/client.crt: no such file or directory
E0729 02:42:23.070944   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 02:42:31.700159   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/client.crt: no such file or directory
E0729 02:42:39.448795   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/custom-flannel-464146/client.crt: no such file or directory
E0729 02:42:54.952290   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/bridge-464146/client.crt: no such file or directory
E0729 02:42:59.384487   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/old-k8s-version-403582/client.crt: no such file or directory
E0729 02:43:41.824196   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/enable-default-cni-464146/client.crt: no such file or directory
E0729 02:44:06.637988   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/flannel-464146/client.crt: no such file or directory
E0729 02:44:51.908257   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/bridge-464146/client.crt: no such file or directory
E0729 02:45:47.459236   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/auto-464146/client.crt: no such file or directory
E0729 02:46:27.074081   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/kindnet-464146/client.crt: no such file or directory
E0729 02:46:27.215463   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
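The cert_rotation.go:168 messages above are emitted by client-go's client-certificate reload worker, which keeps polling certificate paths for profiles (addons-657805, kindnet-464146, and so on) whose .minikube/profiles directories earlier tests already deleted; they are background noise rather than the cause of this failure. A simplified illustration of that reload-and-fail loop, not client-go's actual implementation (paths and interval are assumptions):

	package main

	import (
		"crypto/tls"
		"log"
		"time"
	)

	// reloadLoop periodically re-reads a client certificate key pair the way a
	// cert-rotation worker does; once the files are deleted, every attempt
	// fails with "open <path>: no such file or directory".
	func reloadLoop(certFile, keyFile string, interval time.Duration) {
		for range time.Tick(interval) {
			if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
				log.Printf("key failed with : %v", err) // same shape as the log lines above
			}
		}
	}

	func main() {
		// Illustrative paths; the real ones live under .minikube/profiles/<name>/.
		reloadLoop("client.crt", "client.key", 5*time.Minute)
	}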
panic: test timed out after 2h0m0s
running tests:
	TestStartStop (50m35s)
	TestStartStop/group/default-k8s-diff-port (29m33s)
	TestStartStop/group/default-k8s-diff-port/serial (29m33s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5m55s)
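This panic is the go test global timeout (the suite runs with a 2h -timeout) firing while AddonExistsAfterStop was still polling; everything below is the standard goroutine dump printed by that alarm. A hedged sketch of how a long wait can respect the remaining test deadline and fail on its own terms instead of tripping the global alarm (helper name, margin, and durations are illustrative):

	package integration_example

	import (
		"context"
		"testing"
		"time"
	)

	// waitCtx derives a context that expires a little before `go test -timeout`
	// would panic, so the test reports its own failure instead of a stack dump.
	func waitCtx(t *testing.T, want time.Duration) (context.Context, context.CancelFunc) {
		t.Helper()
		if deadline, ok := t.Deadline(); ok {
			if remaining := time.Until(deadline) - time.Minute; remaining < want {
				want = remaining
			}
		}
		return context.WithTimeout(context.Background(), want)
	}

	func TestExample(t *testing.T) {
		ctx, cancel := waitCtx(t, 9*time.Minute) // e.g. the 9m0s pod wait above
		defer cancel()
		<-ctx.Done() // placeholder for the real polling work
	}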

                                                
                                                
goroutine 8427 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 12 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000766d00, 0xc00121fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000136a08, {0x49d2140, 0x2b, 0x2b}, {0x26b6f92?, 0xc000981200?, 0x4a8ea80?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008bec80)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008bec80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 6 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001aca80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 188 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00081acd0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0008ff920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00081ad80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000988cf0, {0x3696bc0, 0xc00090e750}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000988cf0, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 4302 [chan receive, 24 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00081bdc0, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3703 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3702
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 172 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0008ffa40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3621 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001ce63c0, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3677
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 173 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00081ad80, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 202 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc0006e3320)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 141
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 122 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc001433560)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 204
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 189 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc000505f50, 0xc0000a6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x0?, 0xc000505f50, 0xc000505f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 190 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 189
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3933 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00141e340, {0x2682679?, 0x60400000004?}, 0xc0016dc000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00141e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00141e340, 0xc000848400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2795
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3437 [chan receive, 43 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00081b980, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3409
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 203 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc0006e3320)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 141
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 123 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc001433560)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 204
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3850 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00122b200, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3833
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 4456 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00081bd90, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001878fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00081bdc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018ae210, {0x3696bc0, 0xc000b22d80}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018ae210, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 4301 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0018790e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 4300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3050 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3049
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3190 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc001481750, 0xc00189df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0xc0?, 0xc001481750, 0xc001481798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014817d0?, 0x592e44?, 0xc000060fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3145
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 4457 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc00139df50, 0xc00139df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0xc0?, 0xc00139df50, 0xc00139df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0xc000766ea0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00139dfd0?, 0x592e44?, 0xc0015eb5c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4302
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3702 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc001487750, 0xc00138cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x80?, 0xc001487750, 0xc001487798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0x99b656?, 0xc001706900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0001fe480?, 0xc001d16180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3621
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3318 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019369c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3281
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 6725 [IO wait]:
internal/poll.runtime_pollWait(0x7fb57dea37d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014d3700?, 0xc001943000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014d3700, {0xc001943000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0014d3700, {0xc001943000?, 0xc000017e00?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0008b1030, {0xc001943000?, 0xc00194305f?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001f431d0, {0xc001943000?, 0x0?, 0xc001f431d0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001b29b0, {0x3697360, 0xc001f431d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001b2708, {0x7fb57ccd59e0, 0xc001a412c0}, 0xc0013b5980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001b2708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0001b2708, {0xc00125b000, 0x1000, 0xc000683c00?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001a75200, {0xc0015ed620, 0x9, 0x498dc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3695800, 0xc001a75200}, {0xc0015ed620, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0015ed620, 0x9, 0x13b5dc0?}, {0x3695800?, 0xc001a75200?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0015ed5e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0013b5fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00191de00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6724
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3145 [chan receive, 44 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00122a680, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3143
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3943 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00042b200, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1107 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b02000, 0xc0015ebc80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1106
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 1464 [select, 104 minutes]:
net/http.(*persistConn).writeLoop(0xc001d54900)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1447
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3477 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00081b950, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001f447e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00081b980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b2a2c0, {0x3696bc0, 0xc0016e2300}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b2a2c0, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1254 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc001616480, 0xc0015e9bc0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1253
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3558 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0000ef9d0, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001daaa20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0000efa00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000988a00, {0x3696bc0, 0xc0015b2300}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000988a00, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3548
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 4458 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4457
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2792 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0015fe820, 0x313b360)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2376
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3331 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3330
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3849 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001f44d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3833
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2962 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc000505750, 0xc00154cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0xa0?, 0xc000505750, 0xc000505798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0xc00141e340?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005057d0?, 0x592e44?, 0xc0015ea2a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2956
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 768 [IO wait, 108 minutes]:
internal/poll.runtime_pollWait(0x7fb57dea3ea0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x11?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc001224500)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc001224500)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0008149c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008149c0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004ee0f0, {0x36ada60, 0xc0008149c0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0004ee0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00141eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 765
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 999 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 998
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3144 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013930e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3143
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3137 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001d0b890, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019368a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d0b8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006ea1d0, {0x3696bc0, 0xc0016e2030}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006ea1d0, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3319
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3976 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3975
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1028 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001392b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2795 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0015feea0, {0x265db6f?, 0x0?}, 0xc000848400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0015feea0, 0xc0000ee340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2792
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 8069 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36ba9f0, 0xc002215d10}, {0x36ae150, 0xc001d10760}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36baa60?, 0xc0007d2150?}, 0x3b9aca00, 0xc00006fd38?, 0x1, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36baa60, 0xc0007d2150}, 0xc0014dc9c0, {0xc001456220, 0x1c}, {0x2682615, 0x14}, {0x269a1ef, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36baa60, 0xc0007d2150}, 0xc0014dc9c0, {0xc001456220, 0x1c}, {0x268550f?, 0xc000505f60?}, {0x551133?, 0x4a170f?}, {0xc0008c0700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014dc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014dc9c0, 0xc0016dc000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3933
	/usr/local/go/src/testing/testing.go:1742 +0x390
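Goroutine 8069 above is the test that was still running when the alarm fired: validateAddonAfterStop calling PodWait, which polls for the kubernetes-dashboard pods via wait.PollUntilContextTimeout. A minimal sketch of that style of label-selector poll with client-go, with the selector and timeout taken from the failure above and the rest (function name, interval, readiness check) assumed:

	package integration_example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls until every pod matching the selector is Running or the
	// timeout expires; the same shape as the PodWait helper in goroutine 8069.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

	// Example call matching the failed wait:
	//   waitForPods(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)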

                                                
                                                
goroutine 3620 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001a74c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3677
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1278 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc001552900, 0xc001896240)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 941
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 997 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001bb8890, 0x29)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001392a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001bb88c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019b2dc0, {0x3696bc0, 0xc00121e810}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019b2dc0, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1029
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3049 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc000509f50, 0xc000509f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0xa0?, 0xc000509f50, 0xc000509f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0xc00141eea0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000509fd0?, 0x592e44?, 0xc001f401b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3021
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2897 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009106d0, 0x1b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ac54a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000910700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000988650, {0x3696bc0, 0xc001f40750}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000988650, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2956
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1029 [chan receive, 106 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001bb88c0, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3974 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00042b1d0, 0x5)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001392840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00042b200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013537f0, {0x3696bc0, 0xc001aca000}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013537f0, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3943
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3548 [chan receive, 43 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0000efa00, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3546
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3189 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00122a550, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001392fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00122a680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001352730, {0x3696bc0, 0xc00121e060}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001352730, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3145
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3837 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00122b1d0, 0x17)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001f44c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00122b200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008e8c80, {0x3696bc0, 0xc00162a000}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008e8c80, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3850
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1463 [select, 104 minutes]:
net/http.(*persistConn).readLoop(0xc001d54900)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1447
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 998 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc0019e4f50, 0xc001601f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x89?, 0xc0019e4f50, 0xc0019e4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0x8b484874c084ffff?, 0xffff9c88e8482444?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0019e4fd0?, 0x592e44?, 0xc0312ceb00000001?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1029
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3838 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc001777750, 0xc00154ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x0?, 0xc001777750, 0xc001777798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0x20200a0d6c616e72?, 0x3931202020202020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc001788300?, 0xc0015ebb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3850
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3048 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001d0b950, 0x1b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b17260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d0b980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00148edd0, {0x3696bc0, 0xc001323ef0}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00148edd0, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3021
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2956 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000910700, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2938
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3839 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3838
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3559 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc001398750, 0xc001398798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x80?, 0xc001398750, 0xc001398798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0x10000000099b656?, 0xc001a3c300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000225380?, 0xc0005ff080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3548
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3701 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001ce6390, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001a74b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001ce63c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000988c30, {0x3696bc0, 0xc0015b2450}, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000988c30, 0x3b9aca00, 0x0, 0x1, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3621
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3478 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc001480f50, 0xc001480f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0xce?, 0xc001480f50, 0xc001480f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0xc001d79ba0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001480fd0?, 0x592e44?, 0xc00121ec60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3436 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001f449c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3409
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3020 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b17380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3019
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3191 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3190
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2376 [chan receive, 51 minutes]:
testing.(*T).Run(0xc001d78340, {0x265c5c9?, 0x551133?}, 0x313b360)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001d78340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001d78340, 0x313b188)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3021 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d0b980, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3019
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2955 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ac56e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2938
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3330 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc0019e4750, 0xc00189cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x0?, 0xc0019e4750, 0xc0019e4798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0xc00141eea0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0019e47d0?, 0x592e44?, 0xc001d80090?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3319
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3479 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3478
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3319 [chan receive, 44 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d0b8c0, 0xc0000600c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3281
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3547 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001daab40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3546
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2963 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2962
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3942 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001392c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3970
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3560 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3559
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3975 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bac20, 0xc0000600c0}, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bac20, 0xc0000600c0}, 0x11?, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bac20?, 0xc0000600c0?}, 0xc00141e340?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000095fd0?, 0x592e44?, 0xc00028ea00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3943
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a
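
Note: the repeated traces above all have the same client-go shape. A worker goroutine parks in workqueue.(*Type).Get (the sync.Cond.Wait frames) whenever its queue is empty, while wait.BackoffUntil / wait.Until keeps the loop alive until the stop channel closes; this is the loop transport/cert_rotation.go runs for client certificate rotation, so these goroutines are idle workers rather than leaks. A minimal, illustrative Go sketch of that pattern (not minikube or client-go source, just the same two APIs) follows:

// Illustrative only: one worker blocks in queue.Get until work arrives or
// the queue is shut down, and wait.Until restarts the worker function
// every second until stopCh is closed.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New()
	stopCh := make(chan struct{})

	go wait.Until(func() {
		for {
			item, shutdown := queue.Get() // parks here when the queue is empty
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			queue.Done(item)
		}
	}, time.Second, stopCh)

	queue.Add("rotate-client-cert")
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown()
	close(stopCh)
}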

                                                
                                    

Test pass (226/278)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 54.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 21.72
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 55.08
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.54
31 TestOffline 87.14
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 147.74
40 TestAddons/serial/GCPAuth/Namespaces 0.15
42 TestAddons/parallel/Registry 16.36
44 TestAddons/parallel/InspektorGadget 10.92
46 TestAddons/parallel/HelmTiller 12.72
48 TestAddons/parallel/CSI 83.74
49 TestAddons/parallel/Headlamp 18.78
50 TestAddons/parallel/CloudSpanner 5.56
51 TestAddons/parallel/LocalPath 56.02
52 TestAddons/parallel/NvidiaDevicePlugin 6.59
53 TestAddons/parallel/Yakd 11.94
55 TestCertOptions 64.59
56 TestCertExpiration 287.14
58 TestForceSystemdFlag 87.86
59 TestForceSystemdEnv 42.86
61 TestKVMDriverInstallOrUpdate 5.44
65 TestErrorSpam/setup 41.77
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.53
69 TestErrorSpam/unpause 1.61
70 TestErrorSpam/stop 5.22
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 57.88
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 38.52
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
82 TestFunctional/serial/CacheCmd/cache/add_local 2.26
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 34.26
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.54
93 TestFunctional/serial/LogsFileCmd 1.52
94 TestFunctional/serial/InvalidService 4.06
96 TestFunctional/parallel/ConfigCmd 0.27
97 TestFunctional/parallel/DashboardCmd 16.06
98 TestFunctional/parallel/DryRun 0.27
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 1.2
104 TestFunctional/parallel/ServiceCmdConnect 18.52
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 46.63
108 TestFunctional/parallel/SSHCmd 0.41
109 TestFunctional/parallel/CpCmd 1.26
110 TestFunctional/parallel/MySQL 24.67
111 TestFunctional/parallel/FileSync 0.19
112 TestFunctional/parallel/CertSync 1.23
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
120 TestFunctional/parallel/License 0.68
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.56
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.46
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
127 TestFunctional/parallel/ImageCommands/ImageBuild 6
128 TestFunctional/parallel/ImageCommands/Setup 1.97
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
142 TestFunctional/parallel/MountCmd/any-port 18.49
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.23
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.05
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.66
149 TestFunctional/parallel/MountCmd/specific-port 1.66
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
151 TestFunctional/parallel/ServiceCmd/DeployApp 10.53
152 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
153 TestFunctional/parallel/ProfileCmd/profile_list 0.33
154 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
155 TestFunctional/parallel/ServiceCmd/List 1.24
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
158 TestFunctional/parallel/ServiceCmd/Format 0.38
159 TestFunctional/parallel/ServiceCmd/URL 0.33
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 225.27
167 TestMultiControlPlane/serial/DeployApp 6.74
168 TestMultiControlPlane/serial/PingHostFromPods 1.27
169 TestMultiControlPlane/serial/AddWorkerNode 59.17
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.68
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
178 TestMultiControlPlane/serial/DeleteSecondaryNode 18.13
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 289.43
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
183 TestMultiControlPlane/serial/AddSecondaryNode 79.96
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
188 TestJSONOutput/start/Command 95.43
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.69
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.61
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.39
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 87.69
220 TestMountStart/serial/StartWithMountFirst 27.99
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 25.2
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.69
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 22.5
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 125.02
232 TestMultiNode/serial/DeployApp2Nodes 5.26
233 TestMultiNode/serial/PingHostFrom2Pods 0.77
234 TestMultiNode/serial/AddNode 49.54
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.88
238 TestMultiNode/serial/StopNode 2.27
239 TestMultiNode/serial/StartAfterStop 39.57
241 TestMultiNode/serial/DeleteNode 2.31
243 TestMultiNode/serial/RestartMultiNode 181.2
244 TestMultiNode/serial/ValidateNameConflict 44.18
251 TestScheduledStopUnix 115.43
255 TestRunningBinaryUpgrade 230.23
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 92.99
269 TestNetworkPlugins/group/false 2.95
273 TestNoKubernetes/serial/StartWithStopK8s 71.07
274 TestNoKubernetes/serial/Start 30.59
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
283 TestNoKubernetes/serial/ProfileList 29.42
284 TestNoKubernetes/serial/Stop 2.07
285 TestNoKubernetes/serial/StartNoArgs 25.49
287 TestPause/serial/Start 114.45
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
289 TestStoppedBinaryUpgrade/Setup 2.58
290 TestStoppedBinaryUpgrade/Upgrade 166.4
292 TestNetworkPlugins/group/auto/Start 106.05
293 TestNetworkPlugins/group/kindnet/Start 79.03
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
295 TestNetworkPlugins/group/calico/Start 108.49
296 TestNetworkPlugins/group/auto/KubeletFlags 0.23
297 TestNetworkPlugins/group/auto/NetCatPod 10.25
298 TestNetworkPlugins/group/auto/DNS 0.17
299 TestNetworkPlugins/group/auto/Localhost 0.14
300 TestNetworkPlugins/group/auto/HairPin 0.12
301 TestNetworkPlugins/group/custom-flannel/Start 83.63
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
305 TestNetworkPlugins/group/kindnet/DNS 0.23
306 TestNetworkPlugins/group/kindnet/Localhost 0.21
307 TestNetworkPlugins/group/kindnet/HairPin 0.17
308 TestNetworkPlugins/group/enable-default-cni/Start 101.05
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.2
311 TestNetworkPlugins/group/calico/NetCatPod 11.26
312 TestNetworkPlugins/group/calico/DNS 0.18
313 TestNetworkPlugins/group/calico/Localhost 0.12
314 TestNetworkPlugins/group/calico/HairPin 0.14
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.35
317 TestNetworkPlugins/group/flannel/Start 82.83
318 TestNetworkPlugins/group/custom-flannel/DNS 0.16
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
321 TestNetworkPlugins/group/bridge/Start 102.02
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
331 TestNetworkPlugins/group/flannel/NetCatPod 10.23
332 TestNetworkPlugins/group/flannel/DNS 0.17
333 TestNetworkPlugins/group/flannel/Localhost 0.14
334 TestNetworkPlugins/group/flannel/HairPin 0.14
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
338 TestNetworkPlugins/group/bridge/NetCatPod 10.26
339 TestNetworkPlugins/group/bridge/DNS 0.14
340 TestNetworkPlugins/group/bridge/Localhost 0.11
341 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (54.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-155341 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-155341 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (54.957259965s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (54.96s)
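
Note: the json-events variant drives `minikube start` with -o=json and consumes the emitted event stream. A hedged Go sketch of how such a run could be scripted (this is not the helper behind aaa_download_only_test.go:81; the binary path, profile name, and flags are copied from the log above, and the generic line-by-line JSON decoding is illustrative):

// Illustrative only: run the same download-only command and decode each
// stdout line as a generic JSON object, without assuming a schema.
package main

import (
	"bufio"
	"encoding/json"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-o=json", "--download-only", "-p", "download-only-155341",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=crio", "--driver=kvm2")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			continue // skip any non-JSON noise
		}
		log.Printf("event: %v", event)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("download-only run failed: %v", err)
	}
}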

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-155341
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-155341: exit status 85 (58.069617ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-155341 | jenkins | v1.33.1 | 29 Jul 24 00:46 UTC |          |
	|         | -p download-only-155341        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 00:46:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 00:46:45.503375   16635 out.go:291] Setting OutFile to fd 1 ...
	I0729 00:46:45.503496   16635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:46:45.503507   16635 out.go:304] Setting ErrFile to fd 2...
	I0729 00:46:45.503512   16635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:46:45.503681   16635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	W0729 00:46:45.503791   16635 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19312-9421/.minikube/config/config.json: open /home/jenkins/minikube-integration/19312-9421/.minikube/config/config.json: no such file or directory
	I0729 00:46:45.504402   16635 out.go:298] Setting JSON to true
	I0729 00:46:45.505397   16635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1751,"bootTime":1722212254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 00:46:45.505454   16635 start.go:139] virtualization: kvm guest
	I0729 00:46:45.507778   16635 out.go:97] [download-only-155341] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 00:46:45.507965   16635 notify.go:220] Checking for updates...
	W0729 00:46:45.507976   16635 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 00:46:45.509455   16635 out.go:169] MINIKUBE_LOCATION=19312
	I0729 00:46:45.510859   16635 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 00:46:45.512025   16635 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:46:45.513157   16635 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:46:45.514363   16635 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 00:46:45.516584   16635 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 00:46:45.516857   16635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 00:46:45.617059   16635 out.go:97] Using the kvm2 driver based on user configuration
	I0729 00:46:45.617088   16635 start.go:297] selected driver: kvm2
	I0729 00:46:45.617094   16635 start.go:901] validating driver "kvm2" against <nil>
	I0729 00:46:45.617407   16635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:46:45.617523   16635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 00:46:45.632235   16635 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 00:46:45.632284   16635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 00:46:45.632762   16635 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 00:46:45.632923   16635 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 00:46:45.632977   16635 cni.go:84] Creating CNI manager for ""
	I0729 00:46:45.632989   16635 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:46:45.632996   16635 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 00:46:45.633050   16635 start.go:340] cluster config:
	{Name:download-only-155341 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-155341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:46:45.633250   16635 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:46:45.634926   16635 out.go:97] Downloading VM boot image ...
	I0729 00:46:45.634951   16635 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 00:46:56.465984   16635 out.go:97] Starting "download-only-155341" primary control-plane node in "download-only-155341" cluster
	I0729 00:46:56.466012   16635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 00:46:56.577963   16635 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 00:46:56.578010   16635 cache.go:56] Caching tarball of preloaded images
	I0729 00:46:56.578174   16635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 00:46:56.580169   16635 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 00:46:56.580192   16635 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:46:56.691613   16635 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 00:47:09.672493   16635 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:47:09.672585   16635 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:47:10.580038   16635 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 00:47:10.580401   16635 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/download-only-155341/config.json ...
	I0729 00:47:10.580433   16635 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/download-only-155341/config.json: {Name:mk1de961818cc47dcf8d1a8d1521ce88c445cc39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:47:10.580589   16635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 00:47:10.580756   16635 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-155341 host does not exist
	  To start a cluster, run: "minikube start -p download-only-155341"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
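
Note: the Last Start log above shows the preload flow: resolve the remote tarball, download it with an md5 digest carried in the URL (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19), then verify the file on disk before trusting it. A minimal Go sketch of that verify step, assuming nothing about minikube's internal preload.go beyond what the log shows (the file name and digest below are simply the ones from the log):

// Illustrative only: hash a downloaded file and compare it to an expected
// lowercase hex md5 digest before using it.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	err := verifyMD5(
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"f93b07cde9c3289306cbaeb7a1803c19",
	)
	fmt.Println("verify:", err)
}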

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-155341
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (21.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-639764 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-639764 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (21.724567104s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (21.72s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-639764
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-639764: exit status 85 (55.085836ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-155341 | jenkins | v1.33.1 | 29 Jul 24 00:46 UTC |                     |
	|         | -p download-only-155341        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 00:47 UTC | 29 Jul 24 00:47 UTC |
	| delete  | -p download-only-155341        | download-only-155341 | jenkins | v1.33.1 | 29 Jul 24 00:47 UTC | 29 Jul 24 00:47 UTC |
	| start   | -o=json --download-only        | download-only-639764 | jenkins | v1.33.1 | 29 Jul 24 00:47 UTC |                     |
	|         | -p download-only-639764        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 00:47:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 00:47:40.778385   16991 out.go:291] Setting OutFile to fd 1 ...
	I0729 00:47:40.778785   16991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:47:40.778848   16991 out.go:304] Setting ErrFile to fd 2...
	I0729 00:47:40.778867   16991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:47:40.779350   16991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 00:47:40.780433   16991 out.go:298] Setting JSON to true
	I0729 00:47:40.781240   16991 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1807,"bootTime":1722212254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 00:47:40.781300   16991 start.go:139] virtualization: kvm guest
	I0729 00:47:40.782961   16991 out.go:97] [download-only-639764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 00:47:40.783128   16991 notify.go:220] Checking for updates...
	I0729 00:47:40.784318   16991 out.go:169] MINIKUBE_LOCATION=19312
	I0729 00:47:40.785650   16991 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 00:47:40.786914   16991 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:47:40.788205   16991 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:47:40.789430   16991 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 00:47:40.791653   16991 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 00:47:40.791871   16991 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 00:47:40.824972   16991 out.go:97] Using the kvm2 driver based on user configuration
	I0729 00:47:40.825003   16991 start.go:297] selected driver: kvm2
	I0729 00:47:40.825009   16991 start.go:901] validating driver "kvm2" against <nil>
	I0729 00:47:40.825369   16991 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:47:40.825464   16991 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 00:47:40.840371   16991 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 00:47:40.840425   16991 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 00:47:40.840895   16991 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 00:47:40.841053   16991 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 00:47:40.841130   16991 cni.go:84] Creating CNI manager for ""
	I0729 00:47:40.841144   16991 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:47:40.841152   16991 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 00:47:40.841221   16991 start.go:340] cluster config:
	{Name:download-only-639764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-639764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:47:40.841311   16991 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:47:40.843013   16991 out.go:97] Starting "download-only-639764" primary control-plane node in "download-only-639764" cluster
	I0729 00:47:40.843030   16991 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 00:47:41.439761   16991 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 00:47:41.439838   16991 cache.go:56] Caching tarball of preloaded images
	I0729 00:47:41.440006   16991 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 00:47:41.441648   16991 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 00:47:41.441666   16991 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:47:41.553728   16991 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 00:48:00.494908   16991 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:48:00.495003   16991 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-639764 host does not exist
	  To start a cluster, run: "minikube start -p download-only-639764"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-639764
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (55.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-933059 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-933059 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (55.08170077s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (55.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-933059
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-933059: exit status 85 (57.979882ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-155341 | jenkins | v1.33.1 | 29 Jul 24 00:46 UTC |                     |
	|         | -p download-only-155341             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 00:47 UTC | 29 Jul 24 00:47 UTC |
	| delete  | -p download-only-155341             | download-only-155341 | jenkins | v1.33.1 | 29 Jul 24 00:47 UTC | 29 Jul 24 00:47 UTC |
	| start   | -o=json --download-only             | download-only-639764 | jenkins | v1.33.1 | 29 Jul 24 00:47 UTC |                     |
	|         | -p download-only-639764             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| delete  | -p download-only-639764             | download-only-639764 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC | 29 Jul 24 00:48 UTC |
	| start   | -o=json --download-only             | download-only-933059 | jenkins | v1.33.1 | 29 Jul 24 00:48 UTC |                     |
	|         | -p download-only-933059             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 00:48:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 00:48:02.804138   17265 out.go:291] Setting OutFile to fd 1 ...
	I0729 00:48:02.804230   17265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:48:02.804234   17265 out.go:304] Setting ErrFile to fd 2...
	I0729 00:48:02.804239   17265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 00:48:02.804393   17265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 00:48:02.804950   17265 out.go:298] Setting JSON to true
	I0729 00:48:02.805697   17265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1829,"bootTime":1722212254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 00:48:02.805749   17265 start.go:139] virtualization: kvm guest
	I0729 00:48:02.807717   17265 out.go:97] [download-only-933059] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 00:48:02.807877   17265 notify.go:220] Checking for updates...
	I0729 00:48:02.809473   17265 out.go:169] MINIKUBE_LOCATION=19312
	I0729 00:48:02.810978   17265 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 00:48:02.812354   17265 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 00:48:02.813569   17265 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 00:48:02.814917   17265 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 00:48:02.817564   17265 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 00:48:02.817762   17265 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 00:48:02.850319   17265 out.go:97] Using the kvm2 driver based on user configuration
	I0729 00:48:02.850344   17265 start.go:297] selected driver: kvm2
	I0729 00:48:02.850349   17265 start.go:901] validating driver "kvm2" against <nil>
	I0729 00:48:02.850654   17265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:48:02.850725   17265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-9421/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 00:48:02.866042   17265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 00:48:02.866110   17265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 00:48:02.866621   17265 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 00:48:02.866804   17265 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 00:48:02.866832   17265 cni.go:84] Creating CNI manager for ""
	I0729 00:48:02.866841   17265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 00:48:02.866857   17265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 00:48:02.866922   17265 start.go:340] cluster config:
	{Name:download-only-933059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-933059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 00:48:02.867072   17265 iso.go:125] acquiring lock: {Name:mkae92bdefe00394b5e3a0cccfd3790c642b98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 00:48:02.868829   17265 out.go:97] Starting "download-only-933059" primary control-plane node in "download-only-933059" cluster
	I0729 00:48:02.868856   17265 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 00:48:03.458207   17265 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 00:48:03.458245   17265 cache.go:56] Caching tarball of preloaded images
	I0729 00:48:03.458443   17265 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 00:48:03.460257   17265 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 00:48:03.460280   17265 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:48:03.568678   17265 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 00:48:16.030784   17265 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:48:16.030900   17265 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-9421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 00:48:16.769244   17265 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 00:48:16.769621   17265 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/download-only-933059/config.json ...
	I0729 00:48:16.769659   17265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/download-only-933059/config.json: {Name:mk6a7c141a2d225572a9ecfbbd942fb5a491fc42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 00:48:16.769829   17265 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 00:48:16.769983   17265 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19312-9421/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-933059 host does not exist
	  To start a cluster, run: "minikube start -p download-only-933059"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
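
Note on the CNI selection logged above (cni.go:146): with the kvm2 driver, the crio runtime, and no explicit --cni flag, minikube recommends the bridge CNI and sets NetworkPlugin=cni. A simplified Go sketch of just that decision follows; it is not minikube's real cni.Choose logic, only the case exercised in this run.

// choose_cni.go: simplified illustration of the CNI choice shown by cni.go
// above; the real selection in minikube handles many more driver/runtime cases.
package main

import "fmt"

func chooseCNI(requested, driver, runtime string) string {
	if requested != "" {
		return requested // user asked for a specific CNI
	}
	// Case seen in this log: kvm2 driver + crio runtime -> bridge CNI.
	if driver == "kvm2" && runtime == "crio" {
		return "bridge"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("", "kvm2", "crio")) // prints "bridge"
}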

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-933059
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-899353 --alsologtostderr --binary-mirror http://127.0.0.1:44815 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-899353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-899353
--- PASS: TestBinaryMirror (0.54s)
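
The --binary-mirror URL in the TestBinaryMirror run above points minikube at a local HTTP endpoint that stands in for dl.k8s.io when it fetches kubectl, kubelet, and kubeadm. A minimal Go sketch of such a mirror follows, assuming the binaries are laid out on disk under the upstream release/<version>/bin/linux/amd64/ paths; the ./mirror directory name is illustrative.

// binary_mirror.go: serve a local directory as a stand-in for dl.k8s.io,
// the role played by the --binary-mirror endpoint in TestBinaryMirror.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Expects files such as ./mirror/release/v1.30.3/bin/linux/amd64/kubectl
	// (layout assumed to follow the upstream release bucket).
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on http://127.0.0.1:44815")
	log.Fatal(http.ListenAndServe("127.0.0.1:44815", fs))
}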

                                                
                                    
x
+
TestOffline (87.14s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-684076 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-684076 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.080643552s)
helpers_test.go:175: Cleaning up "offline-crio-684076" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-684076
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-684076: (1.055506411s)
--- PASS: TestOffline (87.14s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-657805
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-657805: exit status 85 (56.531359ms)

                                                
                                                
-- stdout --
	* Profile "addons-657805" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657805"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-657805
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-657805: exit status 85 (56.776405ms)

                                                
                                                
-- stdout --
	* Profile "addons-657805" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-657805"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (147.74s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-657805 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-657805 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.742519455s)
--- PASS: TestAddons/Setup (147.74s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-657805 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-657805 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.831981ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-vvt4p" [c2c15540-cbdd-4d9d-93ee-242fed10a376] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011313982s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4dnlr" [776b01e7-fab4-4418-bc4f-350a057e9cd4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004396454s
addons_test.go:342: (dbg) Run:  kubectl --context addons-657805 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-657805 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-657805 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.220753363s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 ip
2024/07/29 00:52:01 [DEBUG] GET http://192.168.39.18:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.36s)
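
The wget --spider call inside the cluster and the "GET http://192.168.39.18:5000" debug line above are both simple reachability probes of the registry addon. A minimal Go version of the host-side probe follows; the IP and port are taken from the log, and the bare GET against the root path mirrors the logged request.

// registry_probe.go: reachability probe of the registry addon, mirroring the
// "GET http://192.168.39.18:5000" debug line above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.18:5000")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}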

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hwp5k" [c825cafb-1f64-49da-9e19-758559921f81] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01259833s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-657805
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-657805: (5.905207092s)
--- PASS: TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.72s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.426075ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-ctj2p" [19ff6eb3-431f-4705-9f70-09fb802cccd1] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005595542s
addons_test.go:475: (dbg) Run:  kubectl --context addons-657805 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-657805 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.106544249s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.72s)

                                                
                                    
x
+
TestAddons/parallel/CSI (83.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.1845ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-657805 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-657805 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f7aa49b8-d92d-4dcc-94b5-1308b99353ae] Pending
helpers_test.go:344: "task-pv-pod" [f7aa49b8-d92d-4dcc-94b5-1308b99353ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f7aa49b8-d92d-4dcc-94b5-1308b99353ae] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003768863s
addons_test.go:590: (dbg) Run:  kubectl --context addons-657805 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-657805 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-657805 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-657805 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-657805 delete pod task-pv-pod: (1.226574566s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-657805 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-657805 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-657805 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [14c0fec7-4b15-4051-9a36-68907a275010] Pending
helpers_test.go:344: "task-pv-pod-restore" [14c0fec7-4b15-4051-9a36-68907a275010] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [14c0fec7-4b15-4051-9a36-68907a275010] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005807768s
addons_test.go:632: (dbg) Run:  kubectl --context addons-657805 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-657805 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-657805 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.831419607s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (83.74s)
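
The repeated helpers_test.go:394 invocations above amount to a polling loop: shell out to kubectl, read the PVC's .status.phase via jsonpath, and retry until the claim is bound or the timeout expires. A sketch of that loop follows; the 2-second interval and the "Bound" target are assumptions, since the helper's actual cadence is not visible in the log.

// wait_pvc.go: sketch of the jsonpath polling loop implied by the repeated
// "kubectl get pvc ... -o jsonpath={.status.phase}" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVC(context, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	fmt.Println(waitForPVC("addons-657805", "hpvc", "default", 6*time.Minute))
}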

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-657805 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-657805 --alsologtostderr -v=1: (1.019706381s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-rm2tm" [37016442-9560-4608-811c-61cc9bfff166] Pending
helpers_test.go:344: "headlamp-7867546754-rm2tm" [37016442-9560-4608-811c-61cc9bfff166] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-rm2tm" [37016442-9560-4608-811c-61cc9bfff166] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004416332s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 addons disable headlamp --alsologtostderr -v=1: (5.75403017s)
--- PASS: TestAddons/parallel/Headlamp (18.78s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-m8db4" [09a17ff7-2761-458d-98e4-4ec53e5a7eb4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004028869s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-657805
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.02s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-657805 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-657805 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9239ac70-482e-49cd-b1a5-311b6cd4b096] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9239ac70-482e-49cd-b1a5-311b6cd4b096] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9239ac70-482e-49cd-b1a5-311b6cd4b096] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003719703s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-657805 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 ssh "cat /opt/local-path-provisioner/pvc-e4f965f3-bc18-4e6c-89fd-eee01e8cf9ee_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-657805 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-657805 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.228294477s)
--- PASS: TestAddons/parallel/LocalPath (56.02s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q9787" [88e23009-4d91-4d63-b0ed-514cd85efcad] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005079332s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-657805
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-hmgs2" [d1e99dbe-60aa-482c-a260-c782cea74a3a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005882041s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-657805 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-657805 addons disable yakd --alsologtostderr -v=1: (5.933497897s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

                                                
                                    
x
+
TestCertOptions (64.59s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-343391 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0729 01:56:27.215422   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-343391 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m3.32541741s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-343391 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-343391 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-343391 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-343391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-343391
--- PASS: TestCertOptions (64.59s)
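
The openssl x509 call in TestCertOptions above inspects the generated apiserver certificate to confirm that the extra --apiserver-ips and --apiserver-names values ended up as subject alternative names. A small Go sketch of the same inspection follows; the certificate path is the one used inside the VM in this run and would need adjusting elsewhere.

// inspect_cert.go: print the SANs of the apiserver certificate, the property
// the "openssl x509 -text -noout" step above is checking.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}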

                                                
                                    
x
+
TestCertExpiration (287.14s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-923851 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-923851 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m23.672373841s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-923851 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-923851 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (22.58762636s)
helpers_test.go:175: Cleaning up "cert-expiration-923851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-923851
--- PASS: TestCertExpiration (287.14s)

                                                
                                    
x
+
TestForceSystemdFlag (87.86s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-137446 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-137446 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.528343342s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-137446 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-137446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-137446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-137446: (1.132527197s)
--- PASS: TestForceSystemdFlag (87.86s)
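
The "cat /etc/crio/crio.conf.d/02-crio.conf" step in TestForceSystemdFlag above is there to confirm that --force-systemd left CRI-O using the systemd cgroup manager. A minimal sketch of that assertion follows; the exact file contents are not in the log, so the expected cgroup_manager line is an assumption based on CRI-O's configuration format.

// check_crio_cgroup.go: sketch of the check behind the crio.conf.d inspection
// above, assuming the standard CRI-O cgroup_manager setting.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager setting not found")
	}
}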

                                                
                                    
x
+
TestForceSystemdEnv (42.86s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-709905 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-709905 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.866292009s)
helpers_test.go:175: Cleaning up "force-systemd-env-709905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-709905
--- PASS: TestForceSystemdEnv (42.86s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.44s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.44s)

                                                
                                    
x
+
TestErrorSpam/setup (41.77s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-278688 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-278688 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-278688 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-278688 --driver=kvm2  --container-runtime=crio: (41.774599264s)
--- PASS: TestErrorSpam/setup (41.77s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
x
+
TestErrorSpam/stop (5.22s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 stop: (1.599211883s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 stop: (2.035653894s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-278688 --log_dir /tmp/nospam-278688 stop: (1.581772477s)
--- PASS: TestErrorSpam/stop (5.22s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19312-9421/.minikube/files/etc/test/nested/copy/16623/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.88s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-512161 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-512161 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.882984158s)
--- PASS: TestFunctional/serial/StartWithProxy (57.88s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.52s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-512161 --alsologtostderr -v=8
E0729 01:01:27.215130   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.220914   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.231143   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.251441   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.291755   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.372090   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.532511   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:27.852874   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:28.493521   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:29.773847   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:01:32.334950   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-512161 --alsologtostderr -v=8: (38.514626531s)
functional_test.go:663: soft start took 38.515427189s for "functional-512161" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.52s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-512161 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 cache add registry.k8s.io/pause:3.3: (1.17312496s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 cache add registry.k8s.io/pause:latest: (1.040286448s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-512161 /tmp/TestFunctionalserialCacheCmdcacheadd_local3424163282/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cache add minikube-local-cache-test:functional-512161
E0729 01:01:37.455985   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 cache add minikube-local-cache-test:functional-512161: (1.930050297s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cache delete minikube-local-cache-test:functional-512161
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-512161
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.498198ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 kubectl -- --context functional-512161 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-512161 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-512161 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 01:01:47.696818   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:02:08.177524   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-512161 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.262820346s)
functional_test.go:761: restart took 34.262927213s for "functional-512161" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-512161 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 logs: (1.541137302s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 logs --file /tmp/TestFunctionalserialLogsFileCmd1147365958/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 logs --file /tmp/TestFunctionalserialLogsFileCmd1147365958/001/logs.txt: (1.518947574s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-512161 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-512161
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-512161: exit status 115 (263.309506ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.145:30668 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-512161 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 config get cpus: exit status 14 (41.113696ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 config get cpus: exit status 14 (41.318816ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-512161 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-512161 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26788: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.06s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-512161 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-512161 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.732334ms)

                                                
                                                
-- stdout --
	* [functional-512161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:02:49.988690   26697 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:02:49.988829   26697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:02:49.988840   26697 out.go:304] Setting ErrFile to fd 2...
	I0729 01:02:49.988845   26697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:02:49.989032   26697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:02:49.989552   26697 out.go:298] Setting JSON to false
	I0729 01:02:49.990472   26697 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2716,"bootTime":1722212254,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:02:49.990532   26697 start.go:139] virtualization: kvm guest
	I0729 01:02:49.992651   26697 out.go:177] * [functional-512161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:02:49.994103   26697 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:02:49.994142   26697 notify.go:220] Checking for updates...
	I0729 01:02:49.996636   26697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:02:49.997920   26697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:02:49.999252   26697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:02:50.000653   26697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:02:50.002030   26697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:02:50.003665   26697 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:02:50.004045   26697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:02:50.004119   26697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:02:50.019272   26697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0729 01:02:50.019754   26697 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:02:50.020313   26697 main.go:141] libmachine: Using API Version  1
	I0729 01:02:50.020333   26697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:02:50.020645   26697 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:02:50.020845   26697 main.go:141] libmachine: (functional-512161) Calling .DriverName
	I0729 01:02:50.021086   26697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:02:50.021403   26697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:02:50.021444   26697 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:02:50.036016   26697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0729 01:02:50.036499   26697 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:02:50.036949   26697 main.go:141] libmachine: Using API Version  1
	I0729 01:02:50.036969   26697 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:02:50.037234   26697 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:02:50.037427   26697 main.go:141] libmachine: (functional-512161) Calling .DriverName
	I0729 01:02:50.070605   26697 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 01:02:50.072065   26697 start.go:297] selected driver: kvm2
	I0729 01:02:50.072078   26697 start.go:901] validating driver "kvm2" against &{Name:functional-512161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-512161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:02:50.072186   26697 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:02:50.074310   26697 out.go:177] 
	W0729 01:02:50.075513   26697 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 01:02:50.076838   26697 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-512161 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-512161 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-512161 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (181.811292ms)

                                                
                                                
-- stdout --
	* [functional-512161] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 01:02:48.629062   26562 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:02:48.631046   26562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:02:48.631120   26562 out.go:304] Setting ErrFile to fd 2...
	I0729 01:02:48.631140   26562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:02:48.631736   26562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:02:48.632469   26562 out.go:298] Setting JSON to false
	I0729 01:02:48.633648   26562 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2715,"bootTime":1722212254,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:02:48.633767   26562 start.go:139] virtualization: kvm guest
	I0729 01:02:48.639191   26562 out.go:177] * [functional-512161] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 01:02:48.640797   26562 notify.go:220] Checking for updates...
	I0729 01:02:48.640813   26562 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:02:48.642444   26562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:02:48.643875   26562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:02:48.647528   26562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:02:48.649899   26562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:02:48.651271   26562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:02:48.653089   26562 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:02:48.653720   26562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:02:48.653807   26562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:02:48.670228   26562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0729 01:02:48.670633   26562 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:02:48.671251   26562 main.go:141] libmachine: Using API Version  1
	I0729 01:02:48.671272   26562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:02:48.671655   26562 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:02:48.671852   26562 main.go:141] libmachine: (functional-512161) Calling .DriverName
	I0729 01:02:48.672125   26562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:02:48.672411   26562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:02:48.672439   26562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:02:48.697299   26562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0729 01:02:48.697684   26562 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:02:48.698216   26562 main.go:141] libmachine: Using API Version  1
	I0729 01:02:48.698232   26562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:02:48.698557   26562 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:02:48.698768   26562 main.go:141] libmachine: (functional-512161) Calling .DriverName
	I0729 01:02:48.736393   26562 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 01:02:48.737824   26562 start.go:297] selected driver: kvm2
	I0729 01:02:48.737838   26562 start.go:901] validating driver "kvm2" against &{Name:functional-512161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-512161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 01:02:48.737955   26562 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:02:48.740121   26562 out.go:177] 
	W0729 01:02:48.741550   26562 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 01:02:48.743129   26562 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 status
E0729 01:02:49.138609   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-512161 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-512161 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-m5wj8" [af6ef881-41ba-4c3c-86d5-002097977c90] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-m5wj8" [af6ef881-41ba-4c3c-86d5-002097977c90] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.004309283s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.145:30264
functional_test.go:1675: http://192.168.39.145:30264: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-m5wj8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.145:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.145:30264
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.52s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [feb9201f-d816-4783-a712-c4d3a50c0c56] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004778281s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-512161 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-512161 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-512161 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-512161 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2790457e-0543-4752-ba5d-5c7bd8510850] Pending
helpers_test.go:344: "sp-pod" [2790457e-0543-4752-ba5d-5c7bd8510850] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2790457e-0543-4752-ba5d-5c7bd8510850] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004375909s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-512161 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-512161 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-512161 delete -f testdata/storage-provisioner/pod.yaml: (1.814268548s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-512161 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [64eba92b-bdf0-4519-9255-22c501feceeb] Pending
helpers_test.go:344: "sp-pod" [64eba92b-bdf0-4519-9255-22c501feceeb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [64eba92b-bdf0-4519-9255-22c501feceeb] Running
2024/07/29 01:03:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006711841s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-512161 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.63s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh -n functional-512161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cp functional-512161:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1327647498/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh -n functional-512161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh -n functional-512161 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.26s)

                                                
                                    
TestFunctional/parallel/MySQL (24.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-512161 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-spxxg" [5ea2cbc6-19cb-42dd-9242-875a59ddc1f1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-spxxg" [5ea2cbc6-19cb-42dd-9242-875a59ddc1f1] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003634885s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-512161 exec mysql-64454c8b5c-spxxg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-512161 exec mysql-64454c8b5c-spxxg -- mysql -ppassword -e "show databases;": exit status 1 (187.328924ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-512161 exec mysql-64454c8b5c-spxxg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.67s)

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16623/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /etc/test/nested/copy/16623/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16623.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /etc/ssl/certs/16623.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16623.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /usr/share/ca-certificates/16623.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/166232.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /etc/ssl/certs/166232.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/166232.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /usr/share/ca-certificates/166232.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-512161 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh "sudo systemctl is-active docker": exit status 1 (214.655409ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh "sudo systemctl is-active containerd": exit status 1 (211.008389ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                    
TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-512161 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-512161
localhost/kicbase/echo-server:functional-512161
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-512161 image ls --format short --alsologtostderr:
I0729 01:02:58.371521   27047 out.go:291] Setting OutFile to fd 1 ...
I0729 01:02:58.371780   27047 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:02:58.371793   27047 out.go:304] Setting ErrFile to fd 2...
I0729 01:02:58.371799   27047 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:02:58.372075   27047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
I0729 01:02:58.372768   27047 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:02:58.372875   27047 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:02:58.373426   27047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:02:58.373480   27047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:02:58.389888   27047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40601
I0729 01:02:58.390316   27047 main.go:141] libmachine: () Calling .GetVersion
I0729 01:02:58.390967   27047 main.go:141] libmachine: Using API Version  1
I0729 01:02:58.390991   27047 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:02:58.391369   27047 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:02:58.391569   27047 main.go:141] libmachine: (functional-512161) Calling .GetState
I0729 01:02:58.393664   27047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:02:58.393705   27047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:02:58.408898   27047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34811
I0729 01:02:58.409346   27047 main.go:141] libmachine: () Calling .GetVersion
I0729 01:02:58.409820   27047 main.go:141] libmachine: Using API Version  1
I0729 01:02:58.409845   27047 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:02:58.410206   27047 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:02:58.410396   27047 main.go:141] libmachine: (functional-512161) Calling .DriverName
I0729 01:02:58.410602   27047 ssh_runner.go:195] Run: systemctl --version
I0729 01:02:58.410624   27047 main.go:141] libmachine: (functional-512161) Calling .GetSSHHostname
I0729 01:02:58.413507   27047 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:02:58.413906   27047 main.go:141] libmachine: (functional-512161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:02:a6", ip: ""} in network mk-functional-512161: {Iface:virbr1 ExpiryTime:2024-07-29 02:00:11 +0000 UTC Type:0 Mac:52:54:00:29:02:a6 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-512161 Clientid:01:52:54:00:29:02:a6}
I0729 01:02:58.413933   27047 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined IP address 192.168.39.145 and MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:02:58.414108   27047 main.go:141] libmachine: (functional-512161) Calling .GetSSHPort
I0729 01:02:58.414349   27047 main.go:141] libmachine: (functional-512161) Calling .GetSSHKeyPath
I0729 01:02:58.414502   27047 main.go:141] libmachine: (functional-512161) Calling .GetSSHUsername
I0729 01:02:58.414666   27047 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/functional-512161/id_rsa Username:docker}
I0729 01:02:58.539380   27047 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 01:02:58.619324   27047 main.go:141] libmachine: Making call to close driver server
I0729 01:02:58.619340   27047 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:02:58.619574   27047 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:02:58.619600   27047 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 01:02:58.619607   27047 main.go:141] libmachine: Making call to close driver server
I0729 01:02:58.619627   27047 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:02:58.619630   27047 main.go:141] libmachine: (functional-512161) DBG | Closing plugin on server side
I0729 01:02:58.619881   27047 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:02:58.619910   27047 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-512161 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-512161  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-512161  | a05d8f87b28d5 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-512161 image ls --format table --alsologtostderr:
I0729 01:03:00.717070   27295 out.go:291] Setting OutFile to fd 1 ...
I0729 01:03:00.717287   27295 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:03:00.717296   27295 out.go:304] Setting ErrFile to fd 2...
I0729 01:03:00.717300   27295 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:03:00.717477   27295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
I0729 01:03:00.718023   27295 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:03:00.718128   27295 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:03:00.718477   27295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:03:00.718515   27295 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:03:00.733107   27295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32945
I0729 01:03:00.733543   27295 main.go:141] libmachine: () Calling .GetVersion
I0729 01:03:00.734117   27295 main.go:141] libmachine: Using API Version  1
I0729 01:03:00.734139   27295 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:03:00.734483   27295 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:03:00.734678   27295 main.go:141] libmachine: (functional-512161) Calling .GetState
I0729 01:03:00.736565   27295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:03:00.736602   27295 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:03:00.750722   27295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
I0729 01:03:00.751149   27295 main.go:141] libmachine: () Calling .GetVersion
I0729 01:03:00.751652   27295 main.go:141] libmachine: Using API Version  1
I0729 01:03:00.751674   27295 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:03:00.751962   27295 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:03:00.752161   27295 main.go:141] libmachine: (functional-512161) Calling .DriverName
I0729 01:03:00.752351   27295 ssh_runner.go:195] Run: systemctl --version
I0729 01:03:00.752381   27295 main.go:141] libmachine: (functional-512161) Calling .GetSSHHostname
I0729 01:03:00.755296   27295 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:03:00.755821   27295 main.go:141] libmachine: (functional-512161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:02:a6", ip: ""} in network mk-functional-512161: {Iface:virbr1 ExpiryTime:2024-07-29 02:00:11 +0000 UTC Type:0 Mac:52:54:00:29:02:a6 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-512161 Clientid:01:52:54:00:29:02:a6}
I0729 01:03:00.755845   27295 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined IP address 192.168.39.145 and MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:03:00.755970   27295 main.go:141] libmachine: (functional-512161) Calling .GetSSHPort
I0729 01:03:00.756129   27295 main.go:141] libmachine: (functional-512161) Calling .GetSSHKeyPath
I0729 01:03:00.756297   27295 main.go:141] libmachine: (functional-512161) Calling .GetSSHUsername
I0729 01:03:00.756417   27295 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/functional-512161/id_rsa Username:docker}
I0729 01:03:00.854656   27295 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 01:03:00.926200   27295 main.go:141] libmachine: Making call to close driver server
I0729 01:03:00.926221   27295 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:03:00.926510   27295 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:03:00.926530   27295 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 01:03:00.926540   27295 main.go:141] libmachine: Making call to close driver server
I0729 01:03:00.926548   27295 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:03:00.926546   27295 main.go:141] libmachine: (functional-512161) DBG | Closing plugin on server side
I0729 01:03:00.926780   27295 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:03:00.926793   27295 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-512161 image ls --format json --alsologtostderr:
[{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a
3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a05d8f87b28d54068cb4bcd910122af2b4045a64998f43ccf6f42c905fdd4fef","repoDigests":["localhost/minikube-local-cache-test@sha256:26c5991756cbed0cb7d6fff76e43dc85401268abfc9f42d0dbfb2663a6c534a7"],"repoTags":["localhost/minikube-local-cache-test:functional-512161"],"size":"3330"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"5107333e08a87b836d4
8ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-pr
oxy:v1.30.3"],"size":"85953945"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e4895
0f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/
kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-512161"],"size":"4943877"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.
k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-512161 image ls --format json --alsologtostderr:
I0729 01:03:00.251322   27271 out.go:291] Setting OutFile to fd 1 ...
I0729 01:03:00.251437   27271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:03:00.251445   27271 out.go:304] Setting ErrFile to fd 2...
I0729 01:03:00.251449   27271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:03:00.251653   27271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
I0729 01:03:00.252178   27271 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:03:00.252273   27271 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:03:00.252620   27271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:03:00.252663   27271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:03:00.267713   27271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
I0729 01:03:00.268238   27271 main.go:141] libmachine: () Calling .GetVersion
I0729 01:03:00.268840   27271 main.go:141] libmachine: Using API Version  1
I0729 01:03:00.268861   27271 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:03:00.269287   27271 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:03:00.269536   27271 main.go:141] libmachine: (functional-512161) Calling .GetState
I0729 01:03:00.271634   27271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:03:00.271677   27271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:03:00.287355   27271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
I0729 01:03:00.287823   27271 main.go:141] libmachine: () Calling .GetVersion
I0729 01:03:00.288554   27271 main.go:141] libmachine: Using API Version  1
I0729 01:03:00.288581   27271 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:03:00.288933   27271 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:03:00.289134   27271 main.go:141] libmachine: (functional-512161) Calling .DriverName
I0729 01:03:00.289339   27271 ssh_runner.go:195] Run: systemctl --version
I0729 01:03:00.289366   27271 main.go:141] libmachine: (functional-512161) Calling .GetSSHHostname
I0729 01:03:00.292155   27271 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:03:00.292583   27271 main.go:141] libmachine: (functional-512161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:02:a6", ip: ""} in network mk-functional-512161: {Iface:virbr1 ExpiryTime:2024-07-29 02:00:11 +0000 UTC Type:0 Mac:52:54:00:29:02:a6 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-512161 Clientid:01:52:54:00:29:02:a6}
I0729 01:03:00.292611   27271 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined IP address 192.168.39.145 and MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:03:00.292774   27271 main.go:141] libmachine: (functional-512161) Calling .GetSSHPort
I0729 01:03:00.292922   27271 main.go:141] libmachine: (functional-512161) Calling .GetSSHKeyPath
I0729 01:03:00.293076   27271 main.go:141] libmachine: (functional-512161) Calling .GetSSHUsername
I0729 01:03:00.293200   27271 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/functional-512161/id_rsa Username:docker}
I0729 01:03:00.398805   27271 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 01:03:00.445739   27271 main.go:141] libmachine: Making call to close driver server
I0729 01:03:00.445752   27271 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:03:00.446047   27271 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:03:00.446068   27271 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 01:03:00.446096   27271 main.go:141] libmachine: (functional-512161) DBG | Closing plugin on server side
I0729 01:03:00.446098   27271 main.go:141] libmachine: Making call to close driver server
I0729 01:03:00.446132   27271 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:03:00.446372   27271 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:03:00.446395   27271 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)
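The JSON listing above is a flat array of image records with id, repoDigests, repoTags, and size (bytes, encoded as a string). The small decoder below, which reads that format from stdin, is a sketch based only on the field names visible in the output above.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors one entry of the `image ls --format json` output shown above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

// Usage (hypothetical): out/minikube-linux-amd64 -p functional-512161 image ls --format json | go run .
func main() {
	var imgs []image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range imgs {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}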

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-512161 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-512161
size: "4943877"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a05d8f87b28d54068cb4bcd910122af2b4045a64998f43ccf6f42c905fdd4fef
repoDigests:
- localhost/minikube-local-cache-test@sha256:26c5991756cbed0cb7d6fff76e43dc85401268abfc9f42d0dbfb2663a6c534a7
repoTags:
- localhost/minikube-local-cache-test:functional-512161
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-512161 image ls --format yaml --alsologtostderr:
I0729 01:02:58.672206   27071 out.go:291] Setting OutFile to fd 1 ...
I0729 01:02:58.672434   27071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:02:58.672444   27071 out.go:304] Setting ErrFile to fd 2...
I0729 01:02:58.672449   27071 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:02:58.672603   27071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
I0729 01:02:58.673174   27071 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:02:58.673273   27071 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:02:58.673692   27071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:02:58.673743   27071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:02:58.689364   27071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46445
I0729 01:02:58.689904   27071 main.go:141] libmachine: () Calling .GetVersion
I0729 01:02:58.690478   27071 main.go:141] libmachine: Using API Version  1
I0729 01:02:58.690503   27071 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:02:58.690880   27071 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:02:58.691119   27071 main.go:141] libmachine: (functional-512161) Calling .GetState
I0729 01:02:58.693337   27071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:02:58.693392   27071 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:02:58.708337   27071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
I0729 01:02:58.708725   27071 main.go:141] libmachine: () Calling .GetVersion
I0729 01:02:58.709187   27071 main.go:141] libmachine: Using API Version  1
I0729 01:02:58.709221   27071 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:02:58.709567   27071 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:02:58.709768   27071 main.go:141] libmachine: (functional-512161) Calling .DriverName
I0729 01:02:58.710003   27071 ssh_runner.go:195] Run: systemctl --version
I0729 01:02:58.710045   27071 main.go:141] libmachine: (functional-512161) Calling .GetSSHHostname
I0729 01:02:58.713282   27071 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:02:58.713730   27071 main.go:141] libmachine: (functional-512161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:02:a6", ip: ""} in network mk-functional-512161: {Iface:virbr1 ExpiryTime:2024-07-29 02:00:11 +0000 UTC Type:0 Mac:52:54:00:29:02:a6 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-512161 Clientid:01:52:54:00:29:02:a6}
I0729 01:02:58.713763   27071 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined IP address 192.168.39.145 and MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:02:58.713884   27071 main.go:141] libmachine: (functional-512161) Calling .GetSSHPort
I0729 01:02:58.714050   27071 main.go:141] libmachine: (functional-512161) Calling .GetSSHKeyPath
I0729 01:02:58.714186   27071 main.go:141] libmachine: (functional-512161) Calling .GetSSHUsername
I0729 01:02:58.714344   27071 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/functional-512161/id_rsa Username:docker}
I0729 01:02:58.830672   27071 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 01:02:58.926782   27071 main.go:141] libmachine: Making call to close driver server
I0729 01:02:58.926798   27071 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:02:58.927107   27071 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:02:58.927128   27071 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 01:02:58.927140   27071 main.go:141] libmachine: Making call to close driver server
I0729 01:02:58.927149   27071 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:02:58.927371   27071 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:02:58.927387   27071 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 01:02:58.927493   27071 main.go:141] libmachine: (functional-512161) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh pgrep buildkitd: exit status 1 (198.544591ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image build -t localhost/my-image:functional-512161 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 image build -t localhost/my-image:functional-512161 testdata/build --alsologtostderr: (5.575870515s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-512161 image build -t localhost/my-image:functional-512161 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 35c0b7084e0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-512161
--> 5fb7468844c
Successfully tagged localhost/my-image:functional-512161
5fb7468844c754e4360ea0cc215c16ab6e25faa275c79a7bd39f633ff3d44a5f
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-512161 image build -t localhost/my-image:functional-512161 testdata/build --alsologtostderr:
I0729 01:02:59.173523   27140 out.go:291] Setting OutFile to fd 1 ...
I0729 01:02:59.173685   27140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:02:59.173699   27140 out.go:304] Setting ErrFile to fd 2...
I0729 01:02:59.173708   27140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 01:02:59.174009   27140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
I0729 01:02:59.174819   27140 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:02:59.175421   27140 config.go:182] Loaded profile config "functional-512161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 01:02:59.175949   27140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:02:59.176021   27140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:02:59.191696   27140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
I0729 01:02:59.192140   27140 main.go:141] libmachine: () Calling .GetVersion
I0729 01:02:59.192784   27140 main.go:141] libmachine: Using API Version  1
I0729 01:02:59.192807   27140 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:02:59.193168   27140 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:02:59.193341   27140 main.go:141] libmachine: (functional-512161) Calling .GetState
I0729 01:02:59.195208   27140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 01:02:59.195250   27140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 01:02:59.210736   27140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
I0729 01:02:59.211192   27140 main.go:141] libmachine: () Calling .GetVersion
I0729 01:02:59.211739   27140 main.go:141] libmachine: Using API Version  1
I0729 01:02:59.211759   27140 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 01:02:59.212041   27140 main.go:141] libmachine: () Calling .GetMachineName
I0729 01:02:59.212169   27140 main.go:141] libmachine: (functional-512161) Calling .DriverName
I0729 01:02:59.212365   27140 ssh_runner.go:195] Run: systemctl --version
I0729 01:02:59.212398   27140 main.go:141] libmachine: (functional-512161) Calling .GetSSHHostname
I0729 01:02:59.215012   27140 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:02:59.215345   27140 main.go:141] libmachine: (functional-512161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:02:a6", ip: ""} in network mk-functional-512161: {Iface:virbr1 ExpiryTime:2024-07-29 02:00:11 +0000 UTC Type:0 Mac:52:54:00:29:02:a6 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-512161 Clientid:01:52:54:00:29:02:a6}
I0729 01:02:59.215417   27140 main.go:141] libmachine: (functional-512161) DBG | domain functional-512161 has defined IP address 192.168.39.145 and MAC address 52:54:00:29:02:a6 in network mk-functional-512161
I0729 01:02:59.215590   27140 main.go:141] libmachine: (functional-512161) Calling .GetSSHPort
I0729 01:02:59.215782   27140 main.go:141] libmachine: (functional-512161) Calling .GetSSHKeyPath
I0729 01:02:59.216024   27140 main.go:141] libmachine: (functional-512161) Calling .GetSSHUsername
I0729 01:02:59.216186   27140 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/functional-512161/id_rsa Username:docker}
I0729 01:02:59.327686   27140 build_images.go:161] Building image from path: /tmp/build.223281785.tar
I0729 01:02:59.327748   27140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 01:02:59.362489   27140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.223281785.tar
I0729 01:02:59.372437   27140 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.223281785.tar: stat -c "%s %y" /var/lib/minikube/build/build.223281785.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.223281785.tar': No such file or directory
I0729 01:02:59.372473   27140 ssh_runner.go:362] scp /tmp/build.223281785.tar --> /var/lib/minikube/build/build.223281785.tar (3072 bytes)
I0729 01:02:59.429335   27140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.223281785
I0729 01:02:59.442087   27140 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.223281785 -xf /var/lib/minikube/build/build.223281785.tar
I0729 01:02:59.465550   27140 crio.go:315] Building image: /var/lib/minikube/build/build.223281785
I0729 01:02:59.465625   27140 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-512161 /var/lib/minikube/build/build.223281785 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 01:03:04.614108   27140 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-512161 /var/lib/minikube/build/build.223281785 --cgroup-manager=cgroupfs: (5.148455672s)
I0729 01:03:04.614178   27140 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.223281785
I0729 01:03:04.642563   27140 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.223281785.tar
I0729 01:03:04.701045   27140 build_images.go:217] Built localhost/my-image:functional-512161 from /tmp/build.223281785.tar
I0729 01:03:04.701086   27140 build_images.go:133] succeeded building to: functional-512161
I0729 01:03:04.701091   27140 build_images.go:134] failed building to: 
I0729 01:03:04.701113   27140 main.go:141] libmachine: Making call to close driver server
I0729 01:03:04.701126   27140 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:03:04.701380   27140 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:03:04.701394   27140 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 01:03:04.701403   27140 main.go:141] libmachine: Making call to close driver server
I0729 01:03:04.701409   27140 main.go:141] libmachine: (functional-512161) Calling .Close
I0729 01:03:04.701411   27140 main.go:141] libmachine: (functional-512161) DBG | Closing plugin on server side
I0729 01:03:04.701623   27140 main.go:141] libmachine: Successfully made call to close driver server
I0729 01:03:04.701643   27140 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.00s)
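The stderr trace above shows how the build travels on a crio cluster: the testdata/build context is tarred locally, copied to /var/lib/minikube/build on the node, unpacked, and built there with sudo podman build --cgroup-manager=cgroupfs, after which the tag must appear in image ls. The sketch below drives the same user-facing flow with os/exec, reusing the binary path, profile, and tag from the log; it is an illustration rather than the test code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		bin     = "out/minikube-linux-amd64"
		profile = "functional-512161"
		tag     = "localhost/my-image:functional-512161"
	)

	// On a crio profile this ends up as `sudo podman build` on the node,
	// as the stderr trace above shows.
	build := exec.Command(bin, "-p", profile, "image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}

	// The freshly built tag should now be visible in the image list.
	ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	fmt.Println("image present:", strings.Contains(string(ls), tag))
}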

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.947190801s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-512161
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image load --daemon kicbase/echo-server:functional-512161 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 image load --daemon kicbase/echo-server:functional-512161 --alsologtostderr: (1.127306002s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/MountCmd/any-port (18.49s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdany-port1202180313/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722214944303381782" to /tmp/TestFunctionalparallelMountCmdany-port1202180313/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722214944303381782" to /tmp/TestFunctionalparallelMountCmdany-port1202180313/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722214944303381782" to /tmp/TestFunctionalparallelMountCmdany-port1202180313/001/test-1722214944303381782
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.940401ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 01:02 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 01:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 01:02 test-1722214944303381782
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh cat /mount-9p/test-1722214944303381782
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-512161 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [92fd4b28-8ffc-4d79-91d6-a0d6cf3d4f45] Pending
helpers_test.go:344: "busybox-mount" [92fd4b28-8ffc-4d79-91d6-a0d6cf3d4f45] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [92fd4b28-8ffc-4d79-91d6-a0d6cf3d4f45] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [92fd4b28-8ffc-4d79-91d6-a0d6cf3d4f45] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.00483531s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-512161 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdany-port1202180313/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.49s)
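The findmnt probe above fails on its first attempt and passes on the retry because the 9p mount can still be coming up when the first check runs. A minimal Go sketch of that probe-and-retry pattern, assuming a minikube binary on PATH and the functional-512161 profile from this run; this is illustrative, not the test's own helper:

// Probe-and-retry sketch for verifying a minikube 9p mount from the host.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-512161" // assumption: profile name taken from this run
	for attempt := 1; attempt <= 5; attempt++ {
		// Same check the test runs: findmnt inside the guest, filtered to 9p.
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount visible in guest:\n%s", out)
			return
		}
		time.Sleep(time.Second) // the mount may not be ready yet; retry
	}
	fmt.Println("9p mount never became visible at /mount-9p")
}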

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image load --daemon kicbase/echo-server:functional-512161 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-512161
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image load --daemon kicbase/echo-server:functional-512161 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image save kicbase/echo-server:functional-512161 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 image save kicbase/echo-server:functional-512161 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (8.052000882s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image rm kicbase/echo-server:functional-512161 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)
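Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a save/remove/load round trip against the cluster's image store. A minimal sketch of the same round trip driven from Go, assuming the functional-512161 profile from this run and a hypothetical /tmp tarball path:

// Save an image to a tarball, remove it from the cluster, then restore it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("minikube",
		append([]string{"-p", "functional-512161"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	const img, tar = "kicbase/echo-server:functional-512161", "/tmp/echo-server-save.tar"
	run("image", "save", img, tar) // export from the cluster runtime
	run("image", "rm", img)        // delete it from the cluster
	run("image", "load", tar)      // restore it from the tarball
	if strings.Contains(run("image", "ls"), "echo-server") {
		fmt.Println("image restored from", tar)
	}
}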

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-512161
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 image save --daemon kicbase/echo-server:functional-512161 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-512161
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdspecific-port3841649873/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.392025ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdspecific-port3841649873/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh "sudo umount -f /mount-9p": exit status 1 (234.546689ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-512161 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdspecific-port3841649873/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1092024579/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1092024579/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1092024579/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T" /mount1: exit status 1 (334.711826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-512161 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1092024579/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1092024579/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-512161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1092024579/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-512161 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-512161 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-n7z9f" [313c1a1f-6286-4e63-af2b-0bde69e32f22] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-n7z9f" [313c1a1f-6286-4e63-af2b-0bde69e32f22] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.008107146s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "273.737248ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.664783ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "299.180343ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.78524ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 service list: (1.241196439s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-512161 service list -o json: (1.284066218s)
functional_test.go:1494: Took "1.284172705s" to run "out/minikube-linux-amd64 -p functional-512161 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.145:30241
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-512161 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.145:30241
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
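The URL printed by "minikube service hello-node --url" is a plain NodePort endpoint (http://192.168.39.145:30241 in this run). A minimal sketch of consuming it, assuming a single URL on stdout and the same profile and service names as above:

// Resolve a NodePort URL via minikube and issue a plain HTTP GET against it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-512161",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // assumes one URL, e.g. http://192.168.39.145:30241
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}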

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-512161
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-512161
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-512161
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (225.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845088 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 01:04:11.059604   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:06:27.215243   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:06:54.900097   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-845088 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m44.615058085s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (225.27s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-845088 -- rollout status deployment/busybox: (4.568586051s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-dbfgn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-kdxhf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-wvsr6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-dbfgn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-kdxhf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-wvsr6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-dbfgn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-kdxhf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-wvsr6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.74s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-dbfgn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-dbfgn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-kdxhf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-kdxhf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-wvsr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-845088 -- exec busybox-fc5497c4f-wvsr6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-845088 -v=7 --alsologtostderr
E0729 01:07:23.072181   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.077433   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.087687   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.107963   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.148061   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.228339   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.388770   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:23.709390   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:24.349800   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:25.630135   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:28.190511   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:33.311406   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:07:43.552376   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-845088 -v=7 --alsologtostderr: (58.332479427s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
E0729 01:08:04.032714   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-845088 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp testdata/cp-test.txt ha-845088:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088:/home/docker/cp-test.txt ha-845088-m02:/home/docker/cp-test_ha-845088_ha-845088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test_ha-845088_ha-845088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088:/home/docker/cp-test.txt ha-845088-m03:/home/docker/cp-test_ha-845088_ha-845088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test_ha-845088_ha-845088-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088:/home/docker/cp-test.txt ha-845088-m04:/home/docker/cp-test_ha-845088_ha-845088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test_ha-845088_ha-845088-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp testdata/cp-test.txt ha-845088-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m02:/home/docker/cp-test.txt ha-845088:/home/docker/cp-test_ha-845088-m02_ha-845088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test_ha-845088-m02_ha-845088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m02:/home/docker/cp-test.txt ha-845088-m03:/home/docker/cp-test_ha-845088-m02_ha-845088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test_ha-845088-m02_ha-845088-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m02:/home/docker/cp-test.txt ha-845088-m04:/home/docker/cp-test_ha-845088-m02_ha-845088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test_ha-845088-m02_ha-845088-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp testdata/cp-test.txt ha-845088-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt ha-845088:/home/docker/cp-test_ha-845088-m03_ha-845088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test_ha-845088-m03_ha-845088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt ha-845088-m02:/home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test_ha-845088-m03_ha-845088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m03:/home/docker/cp-test.txt ha-845088-m04:/home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test_ha-845088-m03_ha-845088-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp testdata/cp-test.txt ha-845088-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2637143725/001/cp-test_ha-845088-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt ha-845088:/home/docker/cp-test_ha-845088-m04_ha-845088.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088 "sudo cat /home/docker/cp-test_ha-845088-m04_ha-845088.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt ha-845088-m02:/home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m02 "sudo cat /home/docker/cp-test_ha-845088-m04_ha-845088-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 cp ha-845088-m04:/home/docker/cp-test.txt ha-845088-m03:/home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 ssh -n ha-845088-m03 "sudo cat /home/docker/cp-test_ha-845088-m04_ha-845088-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.68s)
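Each cp step above is verified by reading the file back over "minikube ssh" and comparing it with the source. A minimal sketch of one such copy-and-verify round trip, assuming the ha-845088 profile from this run and a hypothetical local cp-test.txt:

// Copy a local file to a node with `minikube cp`, read it back, and compare.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "ha-845088", "ha-845088-m02", "/home/docker/cp-test.txt"
	local, err := os.ReadFile("cp-test.txt") // hypothetical local source file
	if err != nil {
		panic(err)
	}
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		"cp-test.txt", node+":"+remote).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(local)) {
		fmt.Println("copied file matches the local source")
	} else {
		fmt.Println("mismatch between local file and node copy")
	}
}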

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.468581852s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 node delete m03 -v=7 --alsologtostderr
E0729 01:17:50.261275   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-845088 node delete m03 -v=7 --alsologtostderr: (17.395098305s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.13s)
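The go-template query above reduces "kubectl get nodes" output to one Ready condition status per node, so the test can require every remaining node to report True after the delete. A minimal sketch of the same check, assuming kubectl on PATH with its current context already pointed at the cluster under test:

// List each node's Ready condition status and require all of them to be "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("at least one node is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes report Ready=True")
}

The template iterates over every node's conditions and prints only the Ready entry, one status per line, which keeps the pass/fail decision to a simple string comparison.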

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (289.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-845088 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 01:21:27.215891   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:22:23.071243   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
E0729 01:23:46.115280   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-845088 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m48.698534103s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (289.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-845088 --control-plane -v=7 --alsologtostderr
E0729 01:26:27.215470   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-845088 --control-plane -v=7 --alsologtostderr: (1m19.153209034s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-845088 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (95.43s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-168889 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0729 01:27:23.070884   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-168889 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.42828392s)
--- PASS: TestJSONOutput/start/Command (95.43s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-168889 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-168889 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.39s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-168889 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-168889 --output=json --user=testUser: (7.387901361s)
--- PASS: TestJSONOutput/stop/Command (7.39s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-085290 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-085290 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.891059ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c16465ac-f0a2-4e5b-a3ac-aed7e94cc4e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-085290] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"774d9e69-2cce-4fb0-8a76-de18dcb41c6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"ae1a2c5c-f82a-49ba-8bda-0416c9194c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7663dd5-fa48-44c0-af19-a700227fd8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig"}}
	{"specversion":"1.0","id":"cd32eb78-9f20-4f11-9ce7-15841069f82d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube"}}
	{"specversion":"1.0","id":"0518402e-bba3-4b2b-8171-f728061983a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bc212d79-2fc2-445f-94c6-282adccf9689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aa7a86a4-08de-4d45-b858-6b87af705905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-085290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-085290
--- PASS: TestErrorJSONOutput (0.18s)
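Each stdout line above is a self-contained CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the exit code and message that explain the non-zero exit. A minimal decoding sketch, using the field names visible in the output above; the struct is illustrative, not minikube's own type:

// Decode one CloudEvents-style line emitted by `minikube ... --output=json`.
package main

import (
	"encoding/json"
	"fmt"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Error event copied from the stdout above (empty advice/issues/url fields omitted).
	line := `{"specversion":"1.0","id":"aa7a86a4-08de-4d45-b858-6b87af705905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", e.Type, e.Data["message"], e.Data["exitcode"])
}

Because every event type shares the same envelope, a consumer can switch on the Type field and interpret the Data payload per event kind.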

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.69s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-688751 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-688751 --driver=kvm2  --container-runtime=crio: (42.512950601s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-691355 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-691355 --driver=kvm2  --container-runtime=crio: (42.340919754s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-688751
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-691355
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-691355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-691355
helpers_test.go:175: Cleaning up "first-688751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-688751
--- PASS: TestMinikubeProfile (87.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-120271 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-120271 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.993238028s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.99s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-120271 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-120271 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (25.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-132990 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-132990 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.197959717s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.20s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132990 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132990 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-120271 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132990 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132990 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-132990
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-132990: (1.265882339s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (22.5s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-132990
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-132990: (21.496411706s)
--- PASS: TestMountStart/serial/RestartStopped (22.50s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132990 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-132990 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (125.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060411 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 01:31:27.215796   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
E0729 01:32:23.071199   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060411 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m4.630293926s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.02s)

TestMultiNode/serial/DeployApp2Nodes (5.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-060411 -- rollout status deployment/busybox: (3.862398364s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-lfmwp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-t65gs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-lfmwp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-t65gs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-lfmwp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-t65gs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.26s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-lfmwp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-lfmwp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-t65gs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060411 -- exec busybox-fc5497c4f-t65gs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (49.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-060411 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-060411 -v 3 --alsologtostderr: (49.006309119s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.54s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-060411 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp testdata/cp-test.txt multinode-060411:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile705326141/001/cp-test_multinode-060411.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411:/home/docker/cp-test.txt multinode-060411-m02:/home/docker/cp-test_multinode-060411_multinode-060411-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m02 "sudo cat /home/docker/cp-test_multinode-060411_multinode-060411-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411:/home/docker/cp-test.txt multinode-060411-m03:/home/docker/cp-test_multinode-060411_multinode-060411-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m03 "sudo cat /home/docker/cp-test_multinode-060411_multinode-060411-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp testdata/cp-test.txt multinode-060411-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile705326141/001/cp-test_multinode-060411-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt multinode-060411:/home/docker/cp-test_multinode-060411-m02_multinode-060411.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411 "sudo cat /home/docker/cp-test_multinode-060411-m02_multinode-060411.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411-m02:/home/docker/cp-test.txt multinode-060411-m03:/home/docker/cp-test_multinode-060411-m02_multinode-060411-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m03 "sudo cat /home/docker/cp-test_multinode-060411-m02_multinode-060411-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp testdata/cp-test.txt multinode-060411-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile705326141/001/cp-test_multinode-060411-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt multinode-060411:/home/docker/cp-test_multinode-060411-m03_multinode-060411.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411 "sudo cat /home/docker/cp-test_multinode-060411-m03_multinode-060411.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 cp multinode-060411-m03:/home/docker/cp-test.txt multinode-060411-m02:/home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 ssh -n multinode-060411-m02 "sudo cat /home/docker/cp-test_multinode-060411-m03_multinode-060411-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.88s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-060411 node stop m03: (1.460982324s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060411 status: exit status 7 (407.602507ms)

-- stdout --
	multinode-060411
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060411-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-060411-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060411 status --alsologtostderr: exit status 7 (402.643263ms)

-- stdout --
	multinode-060411
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060411-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-060411-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 01:34:27.517641   45012 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:34:27.517914   45012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:34:27.517924   45012 out.go:304] Setting ErrFile to fd 2...
	I0729 01:34:27.517931   45012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:34:27.518112   45012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:34:27.518296   45012 out.go:298] Setting JSON to false
	I0729 01:34:27.518327   45012 mustload.go:65] Loading cluster: multinode-060411
	I0729 01:34:27.518431   45012 notify.go:220] Checking for updates...
	I0729 01:34:27.518736   45012 config.go:182] Loaded profile config "multinode-060411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:34:27.518755   45012 status.go:255] checking status of multinode-060411 ...
	I0729 01:34:27.519223   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.519296   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.537703   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0729 01:34:27.538146   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.538642   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.538669   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.539121   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.539344   45012 main.go:141] libmachine: (multinode-060411) Calling .GetState
	I0729 01:34:27.541046   45012 status.go:330] multinode-060411 host status = "Running" (err=<nil>)
	I0729 01:34:27.541063   45012 host.go:66] Checking if "multinode-060411" exists ...
	I0729 01:34:27.541392   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.541459   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.556167   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I0729 01:34:27.556555   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.556994   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.557013   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.557283   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.557462   45012 main.go:141] libmachine: (multinode-060411) Calling .GetIP
	I0729 01:34:27.560367   45012 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:34:27.560925   45012 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:34:27.560958   45012 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:34:27.561203   45012 host.go:66] Checking if "multinode-060411" exists ...
	I0729 01:34:27.561530   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.561582   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.576223   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34257
	I0729 01:34:27.576659   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.577143   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.577168   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.577466   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.577646   45012 main.go:141] libmachine: (multinode-060411) Calling .DriverName
	I0729 01:34:27.577826   45012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:34:27.577845   45012 main.go:141] libmachine: (multinode-060411) Calling .GetSSHHostname
	I0729 01:34:27.580543   45012 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:34:27.581003   45012 main.go:141] libmachine: (multinode-060411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:32:17", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:31:32 +0000 UTC Type:0 Mac:52:54:00:5b:32:17 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-060411 Clientid:01:52:54:00:5b:32:17}
	I0729 01:34:27.581030   45012 main.go:141] libmachine: (multinode-060411) DBG | domain multinode-060411 has defined IP address 192.168.39.140 and MAC address 52:54:00:5b:32:17 in network mk-multinode-060411
	I0729 01:34:27.581200   45012 main.go:141] libmachine: (multinode-060411) Calling .GetSSHPort
	I0729 01:34:27.581365   45012 main.go:141] libmachine: (multinode-060411) Calling .GetSSHKeyPath
	I0729 01:34:27.581498   45012 main.go:141] libmachine: (multinode-060411) Calling .GetSSHUsername
	I0729 01:34:27.581614   45012 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411/id_rsa Username:docker}
	I0729 01:34:27.658628   45012 ssh_runner.go:195] Run: systemctl --version
	I0729 01:34:27.664781   45012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:34:27.680972   45012 kubeconfig.go:125] found "multinode-060411" server: "https://192.168.39.140:8443"
	I0729 01:34:27.681006   45012 api_server.go:166] Checking apiserver status ...
	I0729 01:34:27.681050   45012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 01:34:27.695094   45012 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	W0729 01:34:27.704859   45012 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 01:34:27.704909   45012 ssh_runner.go:195] Run: ls
	I0729 01:34:27.709325   45012 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0729 01:34:27.713171   45012 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0729 01:34:27.713196   45012 status.go:422] multinode-060411 apiserver status = Running (err=<nil>)
	I0729 01:34:27.713207   45012 status.go:257] multinode-060411 status: &{Name:multinode-060411 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:34:27.713228   45012 status.go:255] checking status of multinode-060411-m02 ...
	I0729 01:34:27.713628   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.713668   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.728481   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I0729 01:34:27.728895   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.729336   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.729359   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.729665   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.729845   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .GetState
	I0729 01:34:27.731324   45012 status.go:330] multinode-060411-m02 host status = "Running" (err=<nil>)
	I0729 01:34:27.731344   45012 host.go:66] Checking if "multinode-060411-m02" exists ...
	I0729 01:34:27.731855   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.731898   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.746346   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0729 01:34:27.746727   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.747148   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.747172   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.747446   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.747630   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .GetIP
	I0729 01:34:27.750239   45012 main.go:141] libmachine: (multinode-060411-m02) DBG | domain multinode-060411-m02 has defined MAC address 52:54:00:ba:bf:02 in network mk-multinode-060411
	I0729 01:34:27.750681   45012 main.go:141] libmachine: (multinode-060411-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:bf:02", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:32:46 +0000 UTC Type:0 Mac:52:54:00:ba:bf:02 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-060411-m02 Clientid:01:52:54:00:ba:bf:02}
	I0729 01:34:27.750707   45012 main.go:141] libmachine: (multinode-060411-m02) DBG | domain multinode-060411-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:ba:bf:02 in network mk-multinode-060411
	I0729 01:34:27.750835   45012 host.go:66] Checking if "multinode-060411-m02" exists ...
	I0729 01:34:27.751225   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.751267   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.765683   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0729 01:34:27.766082   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.766562   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.766585   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.766874   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.767132   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .DriverName
	I0729 01:34:27.767314   45012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 01:34:27.767334   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .GetSSHHostname
	I0729 01:34:27.769643   45012 main.go:141] libmachine: (multinode-060411-m02) DBG | domain multinode-060411-m02 has defined MAC address 52:54:00:ba:bf:02 in network mk-multinode-060411
	I0729 01:34:27.770080   45012 main.go:141] libmachine: (multinode-060411-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:bf:02", ip: ""} in network mk-multinode-060411: {Iface:virbr1 ExpiryTime:2024-07-29 02:32:46 +0000 UTC Type:0 Mac:52:54:00:ba:bf:02 Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-060411-m02 Clientid:01:52:54:00:ba:bf:02}
	I0729 01:34:27.770102   45012 main.go:141] libmachine: (multinode-060411-m02) DBG | domain multinode-060411-m02 has defined IP address 192.168.39.233 and MAC address 52:54:00:ba:bf:02 in network mk-multinode-060411
	I0729 01:34:27.770208   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .GetSSHPort
	I0729 01:34:27.770373   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .GetSSHKeyPath
	I0729 01:34:27.770524   45012 main.go:141] libmachine: (multinode-060411-m02) Calling .GetSSHUsername
	I0729 01:34:27.770666   45012 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-9421/.minikube/machines/multinode-060411-m02/id_rsa Username:docker}
	I0729 01:34:27.846500   45012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 01:34:27.860735   45012 status.go:257] multinode-060411-m02 status: &{Name:multinode-060411-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 01:34:27.860772   45012 status.go:255] checking status of multinode-060411-m03 ...
	I0729 01:34:27.861152   45012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 01:34:27.861199   45012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 01:34:27.876413   45012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0729 01:34:27.876855   45012 main.go:141] libmachine: () Calling .GetVersion
	I0729 01:34:27.877286   45012 main.go:141] libmachine: Using API Version  1
	I0729 01:34:27.877311   45012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 01:34:27.877619   45012 main.go:141] libmachine: () Calling .GetMachineName
	I0729 01:34:27.877857   45012 main.go:141] libmachine: (multinode-060411-m03) Calling .GetState
	I0729 01:34:27.879244   45012 status.go:330] multinode-060411-m03 host status = "Stopped" (err=<nil>)
	I0729 01:34:27.879262   45012 status.go:343] host is not running, skipping remaining checks
	I0729 01:34:27.879270   45012 status.go:257] multinode-060411-m03 status: &{Name:multinode-060411-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (39.57s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 node start m03 -v=7 --alsologtostderr
E0729 01:34:30.262225   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-060411 node start m03 -v=7 --alsologtostderr: (38.967685748s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.57s)

TestMultiNode/serial/DeleteNode (2.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-060411 node delete m03: (1.805517935s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)

TestMultiNode/serial/RestartMultiNode (181.2s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060411 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060411 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.687997064s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060411 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.20s)

TestMultiNode/serial/ValidateNameConflict (44.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-060411
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060411-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-060411-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.021775ms)

-- stdout --
	* [multinode-060411-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-060411-m02' is duplicated with machine name 'multinode-060411-m02' in profile 'multinode-060411'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060411-m03 --driver=kvm2  --container-runtime=crio
E0729 01:46:27.215195   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060411-m03 --driver=kvm2  --container-runtime=crio: (43.092488943s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-060411
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-060411: exit status 80 (208.909046ms)

-- stdout --
	* Adding node m03 to cluster multinode-060411 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-060411-m03 already exists in multinode-060411-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-060411-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.18s)

TestScheduledStopUnix (115.43s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-944395 --memory=2048 --driver=kvm2  --container-runtime=crio
E0729 01:51:27.215746   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-944395 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.890202206s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-944395 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-944395 -n scheduled-stop-944395
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-944395 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-944395 --cancel-scheduled
E0729 01:52:23.071183   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-944395 -n scheduled-stop-944395
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-944395
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-944395 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-944395
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-944395: exit status 7 (64.611577ms)

-- stdout --
	scheduled-stop-944395
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-944395 -n scheduled-stop-944395
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-944395 -n scheduled-stop-944395: exit status 7 (62.918905ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-944395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-944395
--- PASS: TestScheduledStopUnix (115.43s)

TestRunningBinaryUpgrade (230.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2690247327 start -p running-upgrade-713702 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2690247327 start -p running-upgrade-713702 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m16.151845116s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-713702 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-713702 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.369807734s)
helpers_test.go:175: Cleaning up "running-upgrade-713702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-713702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-713702: (2.101943534s)
--- PASS: TestRunningBinaryUpgrade (230.23s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-703567 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-703567 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.583226ms)

-- stdout --
	* [NoKubernetes-703567] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (92.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-703567 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-703567 --driver=kvm2  --container-runtime=crio: (1m32.755713062s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-703567 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.99s)

TestNetworkPlugins/group/false (2.95s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-464146 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-464146 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (99.677848ms)

-- stdout --
	* [false-464146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0729 01:54:01.755225   53396 out.go:291] Setting OutFile to fd 1 ...
	I0729 01:54:01.755455   53396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:54:01.755463   53396 out.go:304] Setting ErrFile to fd 2...
	I0729 01:54:01.755467   53396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 01:54:01.755674   53396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-9421/.minikube/bin
	I0729 01:54:01.756205   53396 out.go:298] Setting JSON to false
	I0729 01:54:01.757065   53396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5788,"bootTime":1722212254,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 01:54:01.757127   53396 start.go:139] virtualization: kvm guest
	I0729 01:54:01.759552   53396 out.go:177] * [false-464146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 01:54:01.760952   53396 notify.go:220] Checking for updates...
	I0729 01:54:01.760967   53396 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 01:54:01.762316   53396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 01:54:01.763677   53396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-9421/kubeconfig
	I0729 01:54:01.764960   53396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-9421/.minikube
	I0729 01:54:01.766160   53396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 01:54:01.767432   53396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 01:54:01.769305   53396 config.go:182] Loaded profile config "NoKubernetes-703567": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:54:01.769451   53396 config.go:182] Loaded profile config "offline-crio-684076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 01:54:01.769644   53396 config.go:182] Loaded profile config "running-upgrade-713702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0729 01:54:01.769763   53396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 01:54:01.808393   53396 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 01:54:01.809465   53396 start.go:297] selected driver: kvm2
	I0729 01:54:01.809482   53396 start.go:901] validating driver "kvm2" against <nil>
	I0729 01:54:01.809492   53396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 01:54:01.811317   53396 out.go:177] 
	W0729 01:54:01.812444   53396 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 01:54:01.813557   53396 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-464146 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-464146

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-464146

>>> host: /etc/nsswitch.conf:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: /etc/hosts:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: /etc/resolv.conf:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-464146

>>> host: crictl pods:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: crictl containers:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> k8s: describe netcat deployment:
error: context "false-464146" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-464146" does not exist

>>> k8s: netcat logs:
error: context "false-464146" does not exist

>>> k8s: describe coredns deployment:
error: context "false-464146" does not exist

>>> k8s: describe coredns pods:
error: context "false-464146" does not exist

>>> k8s: coredns logs:
error: context "false-464146" does not exist

>>> k8s: describe api server pod(s):
error: context "false-464146" does not exist

>>> k8s: api server logs:
error: context "false-464146" does not exist

>>> host: /etc/cni:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: ip a s:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: ip r s:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: iptables-save:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> host: iptables table nat:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

>>> k8s: describe kube-proxy daemon set:
error: context "false-464146" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-464146" does not exist

>>> k8s: kube-proxy logs:
error: context "false-464146" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-464146

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-464146"

                                                
                                                
----------------------- debugLogs end: false-464146 [took: 2.686703956s] --------------------------------
helpers_test.go:175: Cleaning up "false-464146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-464146
--- PASS: TestNetworkPlugins/group/false (2.95s)
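
Note on the debugLogs dump above: every host- and kubectl-level probe reports the "false-464146" profile or context as missing because this group apparently never starts a cluster for that profile (the whole group finishes in 2.95s), so there is nothing to collect. A minimal sketch, assuming the same repository layout, of guarding log collection on the profile actually existing (the grep-based JSON check is only approximate):
# Sketch: only collect logs if the profile shows up in `profile list --output=json`.
PROFILE=false-464146
if out/minikube-linux-amd64 profile list --output=json | grep -q "\"Name\":\"${PROFILE}\""; then
  out/minikube-linux-amd64 -p "${PROFILE}" logs -n 25
else
  echo "profile ${PROFILE} not found; nothing to collect"
fi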

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (71.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-703567 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-703567 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m9.864831601s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-703567 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-703567 status -o json: exit status 2 (214.823307ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-703567","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-703567
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (71.07s)
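
The status call above exits 2 rather than 0 because, with --no-kubernetes, the kubelet and API server are intentionally stopped while the host keeps running; the JSON on stdout is still usable. A minimal sketch of reading those fields directly, assuming jq is available on the test host:
# Sketch: extract the component states from the status JSON (jq is an assumption here).
out/minikube-linux-amd64 -p NoKubernetes-703567 status -o json | jq -r '"\(.Host) \(.Kubelet) \(.APIServer)"'
# Per the stdout captured above this prints: Running Stopped Stopped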

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-703567 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-703567 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.593271064s)
--- PASS: TestNoKubernetes/serial/Start (30.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-703567 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-703567 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.43093ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
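
The "Process exited with status 3" in the stderr above is the conventional systemd exit code for an inactive unit, which is exactly what this check wants after a --no-kubernetes start; minikube ssh then surfaces that as its own non-zero exit. A minimal sketch of the same probe run by hand (simplified to query just the kubelet unit):
# Sketch: exit 0 means kubelet is active; any non-zero result means it is not running.
if out/minikube-linux-amd64 ssh -p NoKubernetes-703567 "sudo systemctl is-active --quiet kubelet"; then
  echo "kubelet is active (unexpected for --no-kubernetes)"
else
  echo "kubelet is not active, as expected"
fi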

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (29.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.257521162s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.163788674s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.42s)
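
Both invocations above enumerate every profile on the host; the ~15s each is presumably spent querying the status of the other clusters this run leaves behind. A minimal sketch of pulling just the valid profile names out of the JSON form, assuming the output keeps its usual valid/invalid layout and that jq is installed:
# Sketch: list valid profile names (the .valid[].Name layout is an assumption about the JSON shape).
out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'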

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-703567
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-703567: (2.067877824s)
--- PASS: TestNoKubernetes/serial/Stop (2.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (25.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-703567 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-703567 --driver=kvm2  --container-runtime=crio: (25.493368322s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.49s)

                                                
                                    
x
+
TestPause/serial/Start (114.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-112077 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0729 01:57:06.117644   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/functional-512161/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-112077 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m54.45071783s)
--- PASS: TestPause/serial/Start (114.45s)
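
This start deliberately skips addon installation and waits for all components so the later pause steps operate on a fully settled cluster; the stray cert_rotation error above appears to refer to a functional-512161 profile deleted earlier in the run and looks unrelated. A sketch of the same start followed by the pause/unpause pair the rest of the group presumably exercises (those subtests are not shown in this excerpt):
# Sketch: reproduce the start used by TestPause/serial/Start, then pause and unpause by hand.
out/minikube-linux-amd64 start -p pause-112077 --memory=2048 --install-addons=false --wait=all \
  --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 pause -p pause-112077     # pauses the cluster's control-plane containers
out/minikube-linux-amd64 unpause -p pause-112077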

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-703567 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-703567 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.788936ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.58s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (166.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2930018250 start -p stopped-upgrade-804241 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2930018250 start -p stopped-upgrade-804241 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m30.76237358s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2930018250 -p stopped-upgrade-804241 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2930018250 -p stopped-upgrade-804241 stop: (2.137753514s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-804241 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-804241 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.497653365s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (166.40s)
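
The upgrade path here is: create the cluster with a downloaded v1.26.0 binary, stop it with that same binary, then start it again with the freshly built binary on the same profile. A condensed sketch of that flow using the binaries and flags from the log (note the legacy binary still takes --vm-driver, the new one --driver):
OLD=/tmp/minikube-v1.26.0.2930018250      # legacy binary fetched by the test setup
NEW=out/minikube-linux-amd64              # binary under test
"$OLD" start -p stopped-upgrade-804241 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
"$OLD" -p stopped-upgrade-804241 stop
"$NEW" start -p stopped-upgrade-804241 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio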

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (106.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m46.050079346s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (79.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m19.032261185s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.03s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-804241
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)
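
This final subtest only checks that `logs` still works against the upgraded cluster. A small sketch of capturing those logs to a file instead of stdout; the --file flag is an assumption about this minikube build:
# Sketch: write the post-upgrade logs to a file for archiving (--file assumed supported).
out/minikube-linux-amd64 logs -p stopped-upgrade-804241 --file=stopped-upgrade-804241.log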

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (108.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.489543144s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-g5w92" [eb213ac0-5f17-4674-baf3-862d7c3dae09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-g5w92" [eb213ac0-5f17-4674-baf3-862d7c3dae09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00431239s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)
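
NetCatPod deploys the small dnsutils/netcat workload from testdata/netcat-deployment.yaml and then polls for pods labelled app=netcat to become Ready, which is what the Pending -> Running transitions above record. A kubectl-only sketch of an equivalent deploy-and-wait, assuming the same context and manifest (the test harness itself polls via its own helpers rather than kubectl wait):
kubectl --context auto-464146 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-464146 wait --for=condition=ready pod -l app=netcat --timeout=15m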

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
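
The DNS subtest simply resolves the default kubernetes service from inside the netcat pod. A sketch that also tries the fully qualified name, assuming the default cluster.local DNS domain:
kubectl --context auto-464146 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-464146 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local   # FQDN, assuming cluster.local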

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
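
Localhost and HairPin run the same zero-I/O netcat probe; only the target differs. Dialing localhost exercises loopback inside the pod, while dialing the "netcat" service name from the pod that backs it sends traffic out through the CNI and back to the same pod, i.e. hairpin NAT. Side by side, as taken from the two subtests above:
# Loopback inside the pod (Localhost subtest)
kubectl --context auto-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Back through the pod's own service (HairPin subtest)
kubectl --context auto-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"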

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (83.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m23.628423877s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.63s)
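
Unlike the sibling groups, custom-flannel passes a manifest path to --cni instead of a built-in plugin name, so minikube applies testdata/kube-flannel.yaml itself during start. A sketch of both forms as used in this run:
# Custom CNI from a manifest file (this group)
out/minikube-linux-amd64 start -p custom-flannel-464146 --memory=3072 \
  --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio
# Built-in plugin names used by the other groups in this run: kindnet, calico, flannel, bridge
out/minikube-linux-amd64 start -p flannel-464146 --memory=3072 --cni=flannel --driver=kvm2 --container-runtime=crio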

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zrl2s" [942db25b-008e-437c-afcd-90b3be69e8d7] Running
E0729 02:01:27.215615   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/addons-657805/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006384395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
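
ControllerPod only checks that the CNI's own daemon-set pod (here, label app=kindnet in kube-system) reaches Running; the calico and flannel groups below do the same with their respective labels. A kubectl-only sketch of an equivalent wait, offered as one way to check this outside the test harness:
kubectl --context kindnet-464146 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m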

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mt4nq" [42546469-64aa-4254-a690-de59857a58ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mt4nq" [42546469-64aa-4254-a690-de59857a58ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003976196s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (101.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m41.046631041s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dznrm" [638b74ff-8798-49fa-83d2-30d5acb9524d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005424314s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jqph5" [d8c00d37-9171-4d65-bda9-1281f08e532d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jqph5" [d8c00d37-9171-4d65-bda9-1281f08e532d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004316142s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bgq8s" [016c25b5-afea-4d43-8ed1-f170029b8241] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bgq8s" [016c25b5-afea-4d43-8ed1-f170029b8241] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004005142s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (82.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m22.832089026s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (102.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-464146 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m42.022686761s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dmdvp" [50dfe088-2519-4f18-a4f6-bfdba04766cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-dmdvp" [50dfe088-2519-4f18-a4f6-bfdba04766cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004663227s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mwtc7" [57d65e07-68d2-40bd-b47f-85dc764fa09f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004701762s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7qdsq" [bbc98011-cf77-44d2-8563-f351cb6fc6f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7qdsq" [bbc98011-cf77-44d2-8563-f351cb6fc6f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005028692s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-464146 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-464146 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k2tjb" [2925a0ac-bc75-4a04-bb38-b77e34231c03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k2tjb" [2925a0ac-bc75-4a04-bb38-b77e34231c03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004257176s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-464146 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-464146 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0729 02:34:51.908902   16623 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-9421/.minikube/profiles/bridge-464146/client.crt: no such file or directory

                                                
                                    

Test skip (39/278)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
264 TestNetworkPlugins/group/kubenet 3.32
272 TestNetworkPlugins/group/cilium 3.7
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
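DockerEnv and PodmanEnv are skipped because those commands only make sense when the cluster runs the matching container runtime, and this job uses crio. On a docker-runtime profile, the flow being validated looks roughly like the following (a sketch; the profile name is a placeholder):

  eval $(out/minikube-linux-amd64 -p <profile> docker-env)   # point the local docker CLI at the cluster's daemon
  docker ps                                                  # should now list the cluster's containers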

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
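All eight TunnelCmd sub-tests are skipped for the same reason: the tunnel needs to modify the host routing table, and on this host 'route' cannot be run without a password prompt. One way to let these tests run on a Linux CI machine is a passwordless sudoers drop-in, sketched below under the assumption of a dedicated test user ('jenkins', the file path, and the binary paths are placeholders):

  # hypothetical drop-in /etc/sudoers.d/minikube-tunnel (validate with: visudo -c -f /etc/sudoers.d/minikube-tunnel)
  jenkins ALL=(ALL) NOPASSWD: /sbin/route, /sbin/ip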

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-464146 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-464146" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-464146

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-464146"

                                                
                                                
----------------------- debugLogs end: kubenet-464146 [took: 3.182664954s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-464146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-464146
--- SKIP: TestNetworkPlugins/group/kubenet (3.32s)
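The kubenet variant is skipped because kubenet is not a CNI plugin and the crio runtime requires one, so the harness never starts the profile; that is why every debugLogs probe above reports that the kubenet-464146 context does not exist, and the profile is simply deleted. A crio profile that does exercise a network plugin would instead be started with an explicit CNI, for example (a sketch; the profile name is reused from the log and the CNI choice is illustrative):

  out/minikube-linux-amd64 start -p kubenet-464146 --driver=kvm2 --container-runtime=crio --cni=bridge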

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-464146 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-464146" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-464146

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-464146" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-464146"

                                                
                                                
----------------------- debugLogs end: cilium-464146 [took: 3.540944411s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-464146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-464146
--- SKIP: TestNetworkPlugins/group/cilium (3.70s)

                                                
                                    